2023-07-18 02:14:42,568 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a 2023-07-18 02:14:42,586 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-18 02:14:42,612 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 02:14:42,612 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a, deleteOnExit=true 2023-07-18 02:14:42,612 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 02:14:42,613 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/test.cache.data in system properties and HBase conf 2023-07-18 02:14:42,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 02:14:42,614 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir in system properties and HBase conf 2023-07-18 02:14:42,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 02:14:42,615 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 02:14:42,616 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 02:14:42,743 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-18 02:14:43,219 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 02:14:43,224 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:14:43,225 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:14:43,225 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 02:14:43,226 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:14:43,226 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 02:14:43,227 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 02:14:43,227 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:14:43,228 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:14:43,228 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 02:14:43,229 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/nfs.dump.dir in system properties and HBase conf 2023-07-18 02:14:43,229 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir in system properties and HBase conf 2023-07-18 02:14:43,230 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:14:43,230 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 02:14:43,230 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 02:14:43,772 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:14:43,777 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:14:44,088 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 02:14:44,267 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-18 02:14:44,280 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:14:44,321 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:14:44,361 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/Jetty_localhost_33611_hdfs____.dku57k/webapp 2023-07-18 02:14:44,532 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33611 2023-07-18 02:14:44,544 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:14:44,544 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:14:45,044 WARN [Listener at localhost/45101] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:14:45,130 WARN [Listener at localhost/45101] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:14:45,150 WARN [Listener at localhost/45101] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:14:45,158 INFO [Listener at localhost/45101] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:14:45,173 INFO [Listener at localhost/45101] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/Jetty_localhost_42673_datanode____1qb77f/webapp 2023-07-18 02:14:45,335 INFO [Listener at localhost/45101] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42673 2023-07-18 02:14:45,729 WARN [Listener at localhost/41229] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:14:45,744 WARN [Listener at localhost/41229] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:14:45,747 WARN [Listener at localhost/41229] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:14:45,749 INFO [Listener at localhost/41229] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:14:45,756 INFO [Listener at localhost/41229] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/Jetty_localhost_37057_datanode____.21yl2j/webapp 2023-07-18 02:14:45,874 INFO [Listener at localhost/41229] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37057 2023-07-18 02:14:45,888 WARN [Listener at localhost/43195] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:14:45,902 WARN [Listener at localhost/43195] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:14:45,905 WARN [Listener at localhost/43195] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:14:45,906 INFO [Listener at localhost/43195] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:14:45,910 INFO [Listener at localhost/43195] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/Jetty_localhost_43415_datanode____.g45hk0/webapp 2023-07-18 02:14:46,048 INFO [Listener at localhost/43195] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43415 2023-07-18 02:14:46,100 WARN [Listener at localhost/38101] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:14:46,423 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f3b62ec0144d54b: Processing first storage report for DS-9de188ed-4aa0-40e3-be2d-fc8641659521 from datanode fcec732a-fdff-4880-8d24-29c30e97cc1b 2023-07-18 02:14:46,425 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f3b62ec0144d54b: from storage DS-9de188ed-4aa0-40e3-be2d-fc8641659521 node DatanodeRegistration(127.0.0.1:38365, datanodeUuid=fcec732a-fdff-4880-8d24-29c30e97cc1b, infoPort=40021, 
infoSecurePort=0, ipcPort=41229, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,425 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd9110803c884f8a9: Processing first storage report for DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc from datanode 9aecdbd1-b572-4706-a4d4-21916359a3ed 2023-07-18 02:14:46,426 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd9110803c884f8a9: from storage DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc node DatanodeRegistration(127.0.0.1:33339, datanodeUuid=9aecdbd1-b572-4706-a4d4-21916359a3ed, infoPort=35671, infoSecurePort=0, ipcPort=38101, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,426 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8f3b62ec0144d54b: Processing first storage report for DS-f9479ccc-4dfc-46c4-9981-044006e9bfb6 from datanode fcec732a-fdff-4880-8d24-29c30e97cc1b 2023-07-18 02:14:46,426 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8f3b62ec0144d54b: from storage DS-f9479ccc-4dfc-46c4-9981-044006e9bfb6 node DatanodeRegistration(127.0.0.1:38365, datanodeUuid=fcec732a-fdff-4880-8d24-29c30e97cc1b, infoPort=40021, infoSecurePort=0, ipcPort=41229, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,426 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd9110803c884f8a9: Processing first storage report for DS-cd0812b0-ab70-4123-9583-8718536974b4 from datanode 9aecdbd1-b572-4706-a4d4-21916359a3ed 2023-07-18 02:14:46,426 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd9110803c884f8a9: from storage DS-cd0812b0-ab70-4123-9583-8718536974b4 node DatanodeRegistration(127.0.0.1:33339, datanodeUuid=9aecdbd1-b572-4706-a4d4-21916359a3ed, infoPort=35671, infoSecurePort=0, ipcPort=38101, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,427 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x947e379fc8d4f751: Processing first storage report for DS-afa6b23c-0172-447d-8546-c0b8f662d95b from datanode 23639624-f764-411b-b155-4a61e0a33cb4 2023-07-18 02:14:46,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x947e379fc8d4f751: from storage DS-afa6b23c-0172-447d-8546-c0b8f662d95b node DatanodeRegistration(127.0.0.1:34885, datanodeUuid=23639624-f764-411b-b155-4a61e0a33cb4, infoPort=41429, infoSecurePort=0, ipcPort=43195, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,428 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x947e379fc8d4f751: Processing first storage report for DS-dded97a9-2262-4553-bc59-c8ad5552bec1 from datanode 23639624-f764-411b-b155-4a61e0a33cb4 2023-07-18 02:14:46,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x947e379fc8d4f751: from storage 
DS-dded97a9-2262-4553-bc59-c8ad5552bec1 node DatanodeRegistration(127.0.0.1:34885, datanodeUuid=23639624-f764-411b-b155-4a61e0a33cb4, infoPort=41429, infoSecurePort=0, ipcPort=43195, storageInfo=lv=-57;cid=testClusterID;nsid=2101735056;c=1689646483854), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:14:46,578 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a 2023-07-18 02:14:46,652 INFO [Listener at localhost/38101] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/zookeeper_0, clientPort=54439, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 02:14:46,666 INFO [Listener at localhost/38101] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54439 2023-07-18 02:14:46,673 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:46,675 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:47,355 INFO [Listener at localhost/38101] util.FSUtils(471): Created version file at hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 with version=8 2023-07-18 02:14:47,355 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/hbase-staging 2023-07-18 02:14:47,363 DEBUG [Listener at localhost/38101] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 02:14:47,363 DEBUG [Listener at localhost/38101] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 02:14:47,363 DEBUG [Listener at localhost/38101] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 02:14:47,364 DEBUG [Listener at localhost/38101] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-18 02:14:47,726 INFO [Listener at localhost/38101] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-18 02:14:48,280 INFO [Listener at localhost/38101] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:14:48,330 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:48,331 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:48,332 INFO [Listener at localhost/38101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:14:48,332 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:48,332 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:14:48,514 INFO [Listener at localhost/38101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:14:48,616 DEBUG [Listener at localhost/38101] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-18 02:14:48,750 INFO [Listener at localhost/38101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40909 2023-07-18 02:14:48,767 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:48,770 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:48,801 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40909 connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:48,861 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:409090x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:14:48,875 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40909-0x1017635d76e0000 connected 2023-07-18 02:14:48,902 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:14:48,903 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:14:48,907 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:14:48,917 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40909 2023-07-18 02:14:48,917 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40909 2023-07-18 02:14:48,918 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40909 2023-07-18 02:14:48,918 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40909 2023-07-18 02:14:48,919 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40909 2023-07-18 02:14:48,960 INFO [Listener at localhost/38101] log.Log(170): Logging initialized @7172ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-18 02:14:49,103 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:14:49,103 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:14:49,104 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:14:49,106 INFO [Listener at localhost/38101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 02:14:49,106 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:14:49,106 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:14:49,109 INFO [Listener at localhost/38101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 02:14:49,172 INFO [Listener at localhost/38101] http.HttpServer(1146): Jetty bound to port 42641 2023-07-18 02:14:49,174 INFO [Listener at localhost/38101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:49,204 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,207 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c595dcd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:14:49,208 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,208 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1477abdb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:14:49,390 INFO [Listener at localhost/38101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:14:49,406 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:14:49,406 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:14:49,409 INFO [Listener at localhost/38101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:14:49,417 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,448 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7f562033{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/jetty-0_0_0_0-42641-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4027427070489278928/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 02:14:49,463 INFO [Listener at localhost/38101] server.AbstractConnector(333): Started ServerConnector@1f3e883d{HTTP/1.1, (http/1.1)}{0.0.0.0:42641} 2023-07-18 02:14:49,463 INFO [Listener at localhost/38101] server.Server(415): Started @7675ms 2023-07-18 02:14:49,468 INFO [Listener at localhost/38101] master.HMaster(444): hbase.rootdir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7, hbase.cluster.distributed=false 2023-07-18 02:14:49,556 INFO [Listener at localhost/38101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:14:49,556 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,556 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,557 INFO 
[Listener at localhost/38101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:14:49,557 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,557 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:14:49,563 INFO [Listener at localhost/38101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:14:49,567 INFO [Listener at localhost/38101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45077 2023-07-18 02:14:49,571 INFO [Listener at localhost/38101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:14:49,581 DEBUG [Listener at localhost/38101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:14:49,583 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:49,585 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:49,587 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45077 connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:49,596 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:450770x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:14:49,598 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:450770x0, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:14:49,599 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45077-0x1017635d76e0001 connected 2023-07-18 02:14:49,600 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:14:49,602 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:14:49,606 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45077 2023-07-18 02:14:49,607 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45077 2023-07-18 02:14:49,607 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45077 2023-07-18 02:14:49,608 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45077 2023-07-18 02:14:49,610 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45077 2023-07-18 02:14:49,614 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:14:49,614 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:14:49,614 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:14:49,616 INFO [Listener at localhost/38101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:14:49,617 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:14:49,617 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:14:49,617 INFO [Listener at localhost/38101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:14:49,619 INFO [Listener at localhost/38101] http.HttpServer(1146): Jetty bound to port 41637 2023-07-18 02:14:49,619 INFO [Listener at localhost/38101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:49,628 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,628 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:14:49,629 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,629 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:14:49,783 INFO [Listener at localhost/38101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:14:49,785 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:14:49,786 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:14:49,786 INFO [Listener at localhost/38101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:14:49,787 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,791 INFO 
[Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/jetty-0_0_0_0-41637-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2220605354320390771/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:14:49,793 INFO [Listener at localhost/38101] server.AbstractConnector(333): Started ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:41637} 2023-07-18 02:14:49,793 INFO [Listener at localhost/38101] server.Server(415): Started @8004ms 2023-07-18 02:14:49,809 INFO [Listener at localhost/38101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:14:49,809 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,809 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,810 INFO [Listener at localhost/38101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:14:49,810 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,811 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:14:49,811 INFO [Listener at localhost/38101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:14:49,813 INFO [Listener at localhost/38101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35063 2023-07-18 02:14:49,813 INFO [Listener at localhost/38101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:14:49,819 DEBUG [Listener at localhost/38101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:14:49,820 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:49,821 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:49,823 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35063 connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:49,829 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:350630x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
02:14:49,830 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35063-0x1017635d76e0002 connected 2023-07-18 02:14:49,831 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:14:49,831 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:14:49,832 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:14:49,838 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35063 2023-07-18 02:14:49,839 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35063 2023-07-18 02:14:49,847 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35063 2023-07-18 02:14:49,848 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35063 2023-07-18 02:14:49,849 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35063 2023-07-18 02:14:49,852 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:14:49,852 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:14:49,853 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:14:49,853 INFO [Listener at localhost/38101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:14:49,854 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:14:49,854 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:14:49,854 INFO [Listener at localhost/38101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 02:14:49,855 INFO [Listener at localhost/38101] http.HttpServer(1146): Jetty bound to port 34059 2023-07-18 02:14:49,855 INFO [Listener at localhost/38101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:49,859 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,859 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:14:49,859 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,860 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:14:49,978 INFO [Listener at localhost/38101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:14:49,980 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:14:49,980 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:14:49,980 INFO [Listener at localhost/38101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:14:49,982 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:49,983 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/jetty-0_0_0_0-34059-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7913912735610840176/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:14:49,985 INFO [Listener at localhost/38101] server.AbstractConnector(333): Started ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:34059} 2023-07-18 02:14:49,985 INFO [Listener at localhost/38101] server.Server(415): Started @8196ms 2023-07-18 02:14:49,999 INFO [Listener at localhost/38101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:14:49,999 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,999 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:49,999 INFO [Listener at localhost/38101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:14:50,000 INFO 
[Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:50,000 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:14:50,000 INFO [Listener at localhost/38101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:14:50,002 INFO [Listener at localhost/38101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39557 2023-07-18 02:14:50,002 INFO [Listener at localhost/38101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:14:50,005 DEBUG [Listener at localhost/38101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:14:50,007 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:50,009 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:50,011 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39557 connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:50,022 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:395570x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:14:50,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39557-0x1017635d76e0003 connected 2023-07-18 02:14:50,023 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:14:50,024 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:14:50,025 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:14:50,025 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39557 2023-07-18 02:14:50,026 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39557 2023-07-18 02:14:50,026 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39557 2023-07-18 02:14:50,027 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39557 2023-07-18 02:14:50,027 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39557 2023-07-18 02:14:50,030 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:14:50,030 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:14:50,031 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:14:50,031 INFO [Listener at localhost/38101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:14:50,031 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:14:50,031 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:14:50,035 INFO [Listener at localhost/38101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:14:50,036 INFO [Listener at localhost/38101] http.HttpServer(1146): Jetty bound to port 43871 2023-07-18 02:14:50,036 INFO [Listener at localhost/38101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:50,044 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:50,044 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:14:50,045 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:50,045 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:14:50,176 INFO [Listener at localhost/38101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:14:50,177 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:14:50,178 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:14:50,178 INFO [Listener at localhost/38101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:14:50,180 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:50,181 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/jetty-0_0_0_0-43871-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8389453079023739763/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:14:50,182 INFO [Listener at localhost/38101] server.AbstractConnector(333): Started ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:43871} 2023-07-18 02:14:50,183 INFO [Listener at localhost/38101] server.Server(415): Started @8394ms 2023-07-18 02:14:50,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:50,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2938e868{HTTP/1.1, (http/1.1)}{0.0.0.0:38141} 2023-07-18 02:14:50,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8415ms 2023-07-18 02:14:50,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:50,215 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:14:50,217 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:50,246 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:14:50,246 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:14:50,246 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:14:50,247 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:14:50,248 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:50,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:14:50,251 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40909,1689646487536 from backup master directory 2023-07-18 02:14:50,251 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:14:50,256 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:50,256 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:14:50,257 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:14:50,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:50,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 02:14:50,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 02:14:50,402 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/hbase.id with ID: 6a927052-2b6c-47ef-86d7-463ca10625a2 2023-07-18 02:14:50,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:50,463 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:50,520 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2e3a8222 to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:50,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d56f3e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:50,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:14:50,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 02:14:50,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 02:14:50,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 02:14:50,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 02:14:50,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 02:14:50,622 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:50,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store-tmp 2023-07-18 02:14:50,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:50,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 02:14:50,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:14:50,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:14:50,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 02:14:50,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:14:50,724 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
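Note: the FanOutOneBlockAsyncDFSOutputHelper/SaslHelper DEBUG entries just above record HBase probing the Hadoop client classes to decide which async HDFS output code path to use; the caught IllegalArgumentException and NoSuchMethodException are the expected outcome of that probe on this Hadoop version, as the "should be hadoop 2.x" / "should be hadoop version with HDFS-12396" messages say. A minimal, self-contained sketch of that capability-probe pattern (not the actual helper code; it only assumes hadoop-common and hadoop-hdfs-client on the classpath):

import java.lang.reflect.Method;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FileEncryptionInfo;

// Sketch of the reflection-based capability probes seen in the DEBUG lines above.
public class HadoopCapabilityProbe {

  /** True if the running Hadoop exposes CreateFlag.SHOULD_REPLICATE. */
  static boolean hasShouldReplicateFlag() {
    try {
      CreateFlag.valueOf("SHOULD_REPLICATE"); // throws IllegalArgumentException when missing
      return true;
    } catch (IllegalArgumentException e) {
      return false; // "should be hadoop 2.x", as the DEBUG line puts it
    }
  }

  /** True if DFSClient has the HDFS-12396 decrypt method probed for above. */
  static boolean hasHdfs12396DecryptMethod() {
    try {
      Class<?> dfsClient = Class.forName("org.apache.hadoop.hdfs.DFSClient");
      Method m = dfsClient.getDeclaredMethod(
          "decryptEncryptedDataEncryptionKey", FileEncryptionInfo.class);
      return m != null;
    } catch (ClassNotFoundException | NoSuchMethodException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("SHOULD_REPLICATE present:   " + hasShouldReplicateFlag());
    System.out.println("HDFS-12396 decrypt present: " + hasHdfs12396DecryptMethod());
  }
}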
2023-07-18 02:14:50,724 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:14:50,726 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/WALs/jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:50,749 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40909%2C1689646487536, suffix=, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/WALs/jenkins-hbase4.apache.org,40909,1689646487536, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/oldWALs, maxLogs=10 2023-07-18 02:14:50,822 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:50,822 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:50,822 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:50,833 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 02:14:50,919 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/WALs/jenkins-hbase4.apache.org,40909,1689646487536/jenkins-hbase4.apache.org%2C40909%2C1689646487536.1689646490762 2023-07-18 02:14:50,927 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK], DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK]] 2023-07-18 02:14:50,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:50,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:50,932 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:50,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:51,056 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:51,064 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 02:14:51,107 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 02:14:51,121 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 02:14:51,128 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:51,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:51,152 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:14:51,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:51,163 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9694089600, jitterRate=-0.09716755151748657}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:51,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:14:51,169 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 02:14:51,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 02:14:51,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 02:14:51,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 02:14:51,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 02:14:51,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 52 msec 2023-07-18 02:14:51,262 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 02:14:51,295 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 02:14:51,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
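Note: the split policy printed when this region opens reports desiredMaxFileSize=9694089600 with jitterRate=-0.09716755151748657. Assuming the stock hbase.hregion.max.filesize default of 10737418240 bytes (10 GiB), that figure is simply the configured maximum with the logged jitter applied; the logged numbers are consistent with exactly that base value. A quick worked check of the arithmetic (illustrative computation, not the policy's source code):

// Reproduces the jittered split size reported by ConstantSizeRegionSplitPolicy above.
public class SplitSizeJitterCheck {
  public static void main(String[] args) {
    long configuredMaxFileSize = 10_737_418_240L;       // default hbase.hregion.max.filesize (assumed)
    double jitterRate = -0.09716755151748657;           // value printed in the log
    long desired = configuredMaxFileSize
        + (long) (configuredMaxFileSize * jitterRate);  // apply the jitter
    // Prints a value within a few bytes of the logged desiredMaxFileSize=9694089600;
    // any last-byte difference comes from float rounding inside HBase.
    System.out.println("desiredMaxFileSize ~ " + desired);
  }
}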
2023-07-18 02:14:51,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 02:14:51,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 02:14:51,325 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 02:14:51,328 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:51,329 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 02:14:51,331 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 02:14:51,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 02:14:51,353 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:14:51,353 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:14:51,353 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:14:51,353 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:51,353 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:14:51,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40909,1689646487536, sessionid=0x1017635d76e0000, setting cluster-up flag (Was=false) 2023-07-18 02:14:51,381 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:51,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 02:14:51,392 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:51,398 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:51,405 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 02:14:51,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:51,409 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.hbase-snapshot/.tmp 2023-07-18 02:14:51,490 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(951): ClusterId : 6a927052-2b6c-47ef-86d7-463ca10625a2 2023-07-18 02:14:51,491 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(951): ClusterId : 6a927052-2b6c-47ef-86d7-463ca10625a2 2023-07-18 02:14:51,490 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(951): ClusterId : 6a927052-2b6c-47ef-86d7-463ca10625a2 2023-07-18 02:14:51,493 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 02:14:51,497 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:14:51,497 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:14:51,497 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:14:51,503 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:14:51,503 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:14:51,503 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:14:51,503 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:14:51,503 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:14:51,503 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:14:51,505 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 02:14:51,513 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
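Note: the ZKWatcher DEBUG entries throughout this startup (NodeCreated on /hbase/master and /hbase/running, NodeChildrenChanged on /hbase and /hbase/backup-masters) are ordinary ZooKeeper watch notifications. A minimal stand-alone sketch of the same watch-and-notify pattern using the plain ZooKeeper client rather than HBase's internal ZKWatcher; the connect string, session timeout and paths are taken from the log, but the program itself is purely illustrative:

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class HBaseZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    Watcher watcher = (WatchedEvent event) -> {
      // Session events carry a null path; everything else mirrors the
      // "Received ZooKeeper Event, type=..., path=..." DEBUG lines above.
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected && event.getPath() == null) {
        connected.countDown();
      }
      System.out.println("type=" + event.getType() + ", state=" + event.getState()
          + ", path=" + event.getPath());
    };
    // 127.0.0.1:54439 is the mini-cluster quorum from the log; 90000 ms matches its session timeout.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:54439", 90000, watcher);
    connected.await();
    zk.exists("/hbase/master", true);                                   // fires NodeCreated/NodeDeleted
    zk.exists("/hbase/running", true);
    List<String> backupMasters = zk.getChildren("/hbase/backup-masters", true); // fires NodeChildrenChanged
    System.out.println("backup-masters: " + backupMasters);
    Thread.sleep(60_000);                                               // keep the session open to receive events
    zk.close();
  }
}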
2023-07-18 02:14:51,513 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:14:51,513 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:14:51,513 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:14:51,517 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ReadOnlyZKClient(139): Connect 0x08692c10 to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:51,517 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 02:14:51,517 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ReadOnlyZKClient(139): Connect 0x54420ef1 to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:51,517 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ReadOnlyZKClient(139): Connect 0x1e6a0baf to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:51,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-18 02:14:51,532 DEBUG [RS:1;jenkins-hbase4:35063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19f2f5a4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:51,533 DEBUG [RS:1;jenkins-hbase4:35063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68f8c4bf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:14:51,534 DEBUG [RS:0;jenkins-hbase4:45077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2794a62f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:51,534 DEBUG [RS:0;jenkins-hbase4:45077] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@485519f7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:14:51,536 DEBUG [RS:2;jenkins-hbase4:39557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10ade612, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:51,537 DEBUG [RS:2;jenkins-hbase4:39557] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5190ec29, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:14:51,585 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:35063 2023-07-18 02:14:51,585 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39557 2023-07-18 02:14:51,591 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45077 2023-07-18 02:14:51,591 INFO [RS:2;jenkins-hbase4:39557] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:14:51,597 INFO [RS:2;jenkins-hbase4:39557] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:14:51,597 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:14:51,591 INFO [RS:1;jenkins-hbase4:35063] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:14:51,600 INFO [RS:1;jenkins-hbase4:35063] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:14:51,600 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:14:51,591 INFO [RS:0;jenkins-hbase4:45077] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:14:51,600 INFO [RS:0;jenkins-hbase4:45077] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:14:51,600 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:14:51,602 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:39557, startcode=1689646489998 2023-07-18 02:14:51,602 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:35063, startcode=1689646489808 2023-07-18 02:14:51,606 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:45077, startcode=1689646489555 2023-07-18 02:14:51,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 02:14:51,632 DEBUG [RS:0;jenkins-hbase4:45077] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:14:51,632 DEBUG [RS:2;jenkins-hbase4:39557] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:14:51,632 DEBUG [RS:1;jenkins-hbase4:35063] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:14:51,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:14:51,734 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34289, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:14:51,734 INFO 
[RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44269, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:14:51,734 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43009, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:14:51,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 02:14:51,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:14:51,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
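Note: the "Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000 ..." lines above are the StochasticLoadBalancer echoing its effective settings. A hedged sketch of reading those settings back from an HBase Configuration; the property names are assumptions based on the usual stochastic-balancer keys and should be verified against the HBase version in use:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Reads the balancer settings whose effective values are printed in the log above.
// Key names are assumed, not taken from the log.
public class BalancerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    System.out.println("slop           = " + conf.getFloat("hbase.regions.slop", 0.001f));
    System.out.println("maxSteps       = " + conf.getInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000));
    System.out.println("stepsPerRegion = " + conf.getInt("hbase.master.balancer.stochastic.stepsPerRegion", 800));
    System.out.println("maxRunningTime = " + conf.getLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L));
  }
}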
2023-07-18 02:14:51,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:14:51,740 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:14:51,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:14:51,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:14:51,741 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 02:14:51,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:51,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:14:51,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:51,745 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:14:51,750 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689646521750 2023-07-18 02:14:51,752 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 02:14:51,758 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 02:14:51,761 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) 
at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:14:51,762 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:14:51,765 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:14:51,766 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 02:14:51,772 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:14:51,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 02:14:51,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 02:14:51,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 02:14:51,774 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 02:14:51,775 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, 
period=600000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:51,778 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 02:14:51,780 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 02:14:51,781 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 02:14:51,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 02:14:51,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 02:14:51,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646491788,5,FailOnTimeoutGroup] 2023-07-18 02:14:51,789 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 02:14:51,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646491789,5,FailOnTimeoutGroup] 2023-07-18 02:14:51,790 WARN [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 02:14:51,789 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 02:14:51,789 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 02:14:51,790 WARN [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 02:14:51,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:51,790 WARN [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 02:14:51,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 02:14:51,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:51,792 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
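Note: the "reportForDuty failed; sleeping 100 ms and then retrying" warnings above are the three region servers waiting for the master's RPC services to come up; the ServerNotRunningYetException stack traces a few entries earlier are the master's side of the same handshake. A generic sketch of that sleep-and-retry pattern; the class and method names here are placeholders, not HBase internals:

import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Bounded retry helper in the spirit of the region servers' reportForDuty loop:
// attempt, sleep a short fixed interval, attempt again until success or deadline.
public class RetryUntilMasterUp {
  static boolean retryWithSleep(BooleanSupplier attempt, long sleepMs, long deadlineMs)
      throws InterruptedException {
    long stopAt = System.currentTimeMillis() + deadlineMs;
    while (System.currentTimeMillis() < stopAt) {
      if (attempt.getAsBoolean()) {
        return true;                        // e.g. the master accepted the registration
      }
      TimeUnit.MILLISECONDS.sleep(sleepMs); // "sleeping 100 ms and then retrying"
    }
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    long readyAt = System.currentTimeMillis() + 450;  // pretend the master comes up after ~450 ms
    boolean ok = retryWithSleep(() -> System.currentTimeMillis() >= readyAt, 100, 5_000);
    System.out.println("registered with master: " + ok);
  }
}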
2023-07-18 02:14:51,833 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:51,835 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:51,835 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 2023-07-18 02:14:51,868 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:51,872 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:14:51,875 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:14:51,876 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:14:51,877 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:51,877 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:14:51,880 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:14:51,881 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:14:51,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:51,882 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:14:51,884 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:14:51,884 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:14:51,886 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:51,888 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:51,891 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:51,891 INFO [RS:1;jenkins-hbase4:35063] 
regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:35063, startcode=1689646489808 2023-07-18 02:14:51,891 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:45077, startcode=1689646489555 2023-07-18 02:14:51,895 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:39557, startcode=1689646489998 2023-07-18 02:14:51,899 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,901 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:14:51,901 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 02:14:51,905 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 02:14:51,914 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 2023-07-18 02:14:51,914 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45101 2023-07-18 02:14:51,914 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,914 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42641 2023-07-18 02:14:51,915 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:14:51,915 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:14:51,915 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 02:14:51,918 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,918 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
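Note: the hbase:meta descriptor written a few entries above (families info, rep_barrier and table, all IN_MEMORY with NONE bloom filters and small block sizes for info/table) is built internally by InitMetaProcedure, but the same column-family attributes can be expressed with the public client API. A hedged sketch using TableDescriptorBuilder/ColumnFamilyDescriptorBuilder; the table name is a placeholder and only a subset of the logged attributes is shown:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Mirrors the attributes logged for hbase:meta's 'info' family:
// BLOOMFILTER=NONE, IN_MEMORY=true, VERSIONS=3, BLOCKSIZE=8192, DATA_BLOCK_ENCODING=NONE.
public class MetaLikeDescriptorSketch {
  public static void main(String[] args) {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE)
        .setInMemory(true)
        .setMaxVersions(3)
        .setBlocksize(8192)
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .setBlockCacheEnabled(true)
        .setScope(0)                                        // REPLICATION_SCOPE => '0'
        .build();

    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_meta_like")) // placeholder table name
        .setColumnFamily(info)
        .build();

    System.out.println(desc);
  }
}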
2023-07-18 02:14:51,918 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 02:14:51,919 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 2023-07-18 02:14:51,919 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45101 2023-07-18 02:14:51,923 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 2023-07-18 02:14:51,924 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42641 2023-07-18 02:14:51,924 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45101 2023-07-18 02:14:51,924 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42641 2023-07-18 02:14:51,925 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:51,926 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10690029280, jitterRate=-0.004413440823554993}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:14:51,926 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:14:51,926 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:14:51,926 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:14:51,926 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:14:51,926 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:14:51,926 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:14:51,928 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:14:51,928 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:14:51,932 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:14:51,933 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,934 WARN [RS:2;jenkins-hbase4:39557] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
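Editor's note: the ServerEventsListenerThread entries above track the built-in "default" RSGroup growing from 1 to 3 servers as each region server registers. A sketch of listing that group's membership, assuming the RSGroupAdminClient and RSGroupInfo classes from the hbase-rsgroup module exercised by this test (the exact client API is my assumption, not taken from this log):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListDefaultGroup {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Newly registered region servers land in the built-in "default" group.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      for (Address server : defaultGroup.getServers()) {
        System.out.println("default group member: " + server);
      }
    }
  }
}
```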
2023-07-18 02:14:51,934 INFO [RS:2;jenkins-hbase4:39557] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:51,934 DEBUG [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,934 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,934 WARN [RS:0;jenkins-hbase4:45077] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:14:51,936 INFO [RS:0;jenkins-hbase4:45077] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:51,938 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,939 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,939 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:14:51,939 WARN [RS:1;jenkins-hbase4:35063] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 02:14:51,939 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35063,1689646489808] 2023-07-18 02:14:51,939 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 02:14:51,939 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45077,1689646489555] 2023-07-18 02:14:51,939 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39557,1689646489998] 2023-07-18 02:14:51,939 INFO [RS:1;jenkins-hbase4:35063] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:51,942 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,956 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,957 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 02:14:51,957 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,958 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,958 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,959 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,960 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,960 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:51,961 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:51,962 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:51,971 DEBUG 
[RS:2;jenkins-hbase4:39557] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:14:51,971 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:14:51,971 DEBUG [RS:1;jenkins-hbase4:35063] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:14:51,974 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 02:14:51,984 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 02:14:51,985 INFO [RS:2;jenkins-hbase4:39557] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:14:51,985 INFO [RS:1;jenkins-hbase4:35063] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:14:51,986 INFO [RS:0;jenkins-hbase4:45077] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:14:52,012 INFO [RS:2;jenkins-hbase4:39557] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:14:52,012 INFO [RS:0;jenkins-hbase4:45077] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:14:52,013 INFO [RS:1;jenkins-hbase4:35063] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:14:52,019 INFO [RS:2;jenkins-hbase4:39557] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:14:52,019 INFO [RS:0;jenkins-hbase4:45077] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:14:52,020 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,019 INFO [RS:1;jenkins-hbase4:35063] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:14:52,021 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,020 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
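Editor's note: the MemStoreFlusher and PressureAwareCompactionThroughputController lines above report a ~782 MB global memstore limit with a 95% low-water mark and 50–100 MB/s compaction throughput bounds. A hedged sketch of the configuration keys I believe drive those numbers; the values mirror the logged figures, and in practice they belong in hbase-site.xml on the region servers rather than on a client Configuration as shown:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreAndThroughputTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Global memstore limit as a fraction of region-server heap; the low-water
    // mark is expressed relative to that limit (0.95 matches 743.3M / 782.4M above).
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Pressure-aware compaction throughput bounds, in bytes per second
    // (100 MB/s upper and 50 MB/s lower match the logged values).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
  }
}
```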
2023-07-18 02:14:52,027 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:14:52,031 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:14:52,032 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:14:52,043 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,043 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,043 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,043 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:14:52,044 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 
DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:14:52,044 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,044 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:0;jenkins-hbase4:45077] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:1;jenkins-hbase4:35063] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,045 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:14:52,046 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,047 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,047 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,047 DEBUG [RS:2;jenkins-hbase4:39557] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:52,050 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, 
unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,051 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,051 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,053 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,071 INFO [RS:0;jenkins-hbase4:45077] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:14:52,072 INFO [RS:2;jenkins-hbase4:39557] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:14:52,076 INFO [RS:1;jenkins-hbase4:35063] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:14:52,076 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45077,1689646489555-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,076 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35063,1689646489808-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,076 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39557,1689646489998-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 02:14:52,109 INFO [RS:1;jenkins-hbase4:35063] regionserver.Replication(203): jenkins-hbase4.apache.org,35063,1689646489808 started 2023-07-18 02:14:52,110 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35063,1689646489808, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35063, sessionid=0x1017635d76e0002 2023-07-18 02:14:52,110 INFO [RS:2;jenkins-hbase4:39557] regionserver.Replication(203): jenkins-hbase4.apache.org,39557,1689646489998 started 2023-07-18 02:14:52,110 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39557,1689646489998, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39557, sessionid=0x1017635d76e0003 2023-07-18 02:14:52,110 INFO [RS:0;jenkins-hbase4:45077] regionserver.Replication(203): jenkins-hbase4.apache.org,45077,1689646489555 started 2023-07-18 02:14:52,110 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:14:52,110 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:14:52,111 DEBUG [RS:2;jenkins-hbase4:39557] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:52,111 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45077,1689646489555, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45077, sessionid=0x1017635d76e0001 2023-07-18 02:14:52,111 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39557,1689646489998' 2023-07-18 02:14:52,111 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:14:52,111 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:14:52,111 DEBUG [RS:1;jenkins-hbase4:35063] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:52,111 DEBUG [RS:0;jenkins-hbase4:45077] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:52,115 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45077,1689646489555' 2023-07-18 02:14:52,115 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:14:52,112 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35063,1689646489808' 2023-07-18 02:14:52,115 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:14:52,116 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:14:52,117 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:14:52,117 DEBUG [RS:0;jenkins-hbase4:45077] 
procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:14:52,117 DEBUG [RS:0;jenkins-hbase4:45077] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:52,117 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45077,1689646489555' 2023-07-18 02:14:52,117 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:14:52,118 DEBUG [RS:0;jenkins-hbase4:45077] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:14:52,119 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:14:52,120 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:14:52,121 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:14:52,121 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:14:52,121 DEBUG [RS:0;jenkins-hbase4:45077] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:14:52,121 DEBUG [RS:1;jenkins-hbase4:35063] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:52,127 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35063,1689646489808' 2023-07-18 02:14:52,127 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:14:52,127 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:14:52,127 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:14:52,127 DEBUG [RS:2;jenkins-hbase4:39557] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:52,127 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39557,1689646489998' 2023-07-18 02:14:52,127 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:14:52,127 DEBUG [RS:1;jenkins-hbase4:35063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:14:52,128 DEBUG [RS:1;jenkins-hbase4:35063] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:14:52,128 INFO [RS:1;jenkins-hbase4:35063] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:14:52,129 INFO [RS:1;jenkins-hbase4:35063] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
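Editor's note: both quota managers above report "Quota support disabled", which is the default. A small sketch, assuming hbase.quota.enabled is the relevant switch; it has to be set in hbase-site.xml on the master and region servers, and a client-side Configuration is used here only for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EnableQuotas {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With the default (false), region servers log the two "Quota support disabled"
    // lines above and skip starting the RPC and space quota managers.
    conf.setBoolean("hbase.quota.enabled", true);
  }
}
```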
2023-07-18 02:14:52,127 INFO [RS:0;jenkins-hbase4:45077] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:14:52,129 INFO [RS:0;jenkins-hbase4:45077] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 02:14:52,131 DEBUG [RS:2;jenkins-hbase4:39557] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:14:52,134 DEBUG [RS:2;jenkins-hbase4:39557] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:14:52,134 INFO [RS:2;jenkins-hbase4:39557] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:14:52,134 INFO [RS:2;jenkins-hbase4:39557] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 02:14:52,136 DEBUG [jenkins-hbase4:40909] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 02:14:52,157 DEBUG [jenkins-hbase4:40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:52,159 DEBUG [jenkins-hbase4:40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:52,159 DEBUG [jenkins-hbase4:40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:52,159 DEBUG [jenkins-hbase4:40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:52,159 DEBUG [jenkins-hbase4:40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:52,163 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39557,1689646489998, state=OPENING 2023-07-18 02:14:52,173 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 02:14:52,175 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:52,175 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:14:52,181 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:52,244 INFO [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39557%2C1689646489998, suffix=, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,39557,1689646489998, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:14:52,244 INFO [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45077%2C1689646489555, suffix=, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,45077,1689646489555, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 
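Editor's note: the AbstractFSWAL lines describe each server's write-ahead log: 256 MB block size, 128 MB roll size, at most 32 un-archived logs, written through AsyncFSWALProvider. A sketch of the keys I believe correspond to those values; illustrative only, real tuning belongs in hbase-site.xml:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // WAL implementation ("asyncfs" selects AsyncFSWALProvider, as logged above).
    conf.set("hbase.wal.provider", "asyncfs");
    // WAL block size and the fraction of it at which a roll is requested
    // (256 MB * 0.5 gives the 128 MB roll size in the log).
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // Upper bound on un-archived WAL files per region server (maxLogs=32 above).
    conf.setInt("hbase.regionserver.maxlogs", 32);
  }
}
```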
2023-07-18 02:14:52,248 INFO [RS:1;jenkins-hbase4:35063] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35063%2C1689646489808, suffix=, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,35063,1689646489808, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:14:52,291 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:52,295 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:52,304 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:52,309 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:52,309 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:52,315 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:52,316 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:52,316 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:52,316 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:52,335 INFO [RS:1;jenkins-hbase4:35063] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,35063,1689646489808/jenkins-hbase4.apache.org%2C35063%2C1689646489808.1689646492256 2023-07-18 02:14:52,339 WARN [ReadOnlyZKClient-127.0.0.1:54439@0x2e3a8222] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 02:14:52,339 
DEBUG [RS:1;jenkins-hbase4:35063] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK]] 2023-07-18 02:14:52,342 INFO [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,39557,1689646489998/jenkins-hbase4.apache.org%2C39557%2C1689646489998.1689646492255 2023-07-18 02:14:52,342 INFO [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,45077,1689646489555/jenkins-hbase4.apache.org%2C45077%2C1689646489555.1689646492256 2023-07-18 02:14:52,350 DEBUG [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK], DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK]] 2023-07-18 02:14:52,363 DEBUG [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK], DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK]] 2023-07-18 02:14:52,396 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:52,398 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:52,400 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:52,402 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:52,403 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59100, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:52,404 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39557] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:59100 deadline: 1689646552404, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:52,418 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 02:14:52,419 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:52,424 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39557%2C1689646489998.meta, 
suffix=.meta, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,39557,1689646489998, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:14:52,453 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:52,454 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:52,458 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:52,467 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,39557,1689646489998/jenkins-hbase4.apache.org%2C39557%2C1689646489998.meta.1689646492426.meta 2023-07-18 02:14:52,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK]] 2023-07-18 02:14:52,468 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:52,470 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:14:52,473 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 02:14:52,475 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
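Editor's note: the coprocessor lines above show MultiRowMutationEndpoint being loaded from the table descriptor (HTD) of hbase:meta at priority 536870911. A minimal sketch of attaching the same endpoint to an ordinary table descriptor with the HBase 2.x builder API ("exampletable" is a placeholder name):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class AttachCoprocessor {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("exampletable"))
        // Same endpoint class that hbase:meta carries out of the box.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td);
  }
}
```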
2023-07-18 02:14:52,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 02:14:52,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:52,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 02:14:52,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 02:14:52,484 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:14:52,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:14:52,487 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:14:52,488 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:14:52,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:52,489 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:14:52,492 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:14:52,492 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:14:52,493 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:14:52,494 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:52,494 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:14:52,495 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:14:52,496 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:14:52,496 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:14:52,497 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:52,499 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:52,502 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:52,507 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
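Editor's note: the repeated CompactionConfiguration lines spell out the store-level compaction policy: 3–10 files per minor compaction, 128 MB minimum compact size, ratio 1.2 (5.0 off-peak), weekly major compactions with 50% jitter. A hedged sketch of the keys I believe map to those values, with the logged numbers used as the illustrative settings:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // File-count and size bounds for minor compaction selection
    // (minFilesToCompact:3, maxFilesToCompact:10, minCompactSize:128 MB above).
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    // Selection ratios (1.2 normally, 5.0 during the off-peak window above).
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // Weekly major compactions with 50% jitter (604800000 ms and 0.5 above).
    conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000);
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
  }
}
```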
2023-07-18 02:14:52,512 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:14:52,514 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11845420800, jitterRate=0.10319077968597412}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:14:52,515 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:14:52,528 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689646492392 2023-07-18 02:14:52,566 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39557,1689646489998, state=OPEN 2023-07-18 02:14:52,569 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:14:52,569 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:14:52,574 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 02:14:52,577 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 02:14:52,577 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 02:14:52,577 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39557,1689646489998 in 388 msec 2023-07-18 02:14:52,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 02:14:52,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 622 msec 2023-07-18 02:14:52,591 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0610 sec 2023-07-18 02:14:52,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689646492591, completionTime=-1 2023-07-18 02:14:52,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 02:14:52,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
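Editor's note: once the master marks hbase:meta OPEN and publishes its location under /hbase/meta-region-server, clients can resolve it. A minimal sketch, using the standard 2.x client API, of asking where region 1588230740 landed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class LocateMeta {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      // Prints the host,port,startcode that the master published for hbase:meta.
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}
```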
2023-07-18 02:14:52,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 02:14:52,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689646552660 2023-07-18 02:14:52,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689646612660 2023-07-18 02:14:52,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 68 msec 2023-07-18 02:14:52,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40909,1689646487536-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40909,1689646487536-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,687 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40909,1689646487536-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,691 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40909, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,692 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:52,700 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 02:14:52,719 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
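Editor's note: the becomeActiveMaster thread above schedules the BalancerChore and RegionNormalizerChore with 300000 ms periods. A short sketch of the two period keys I believe control those chores (values equal to the logged periods; purely illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MasterChorePeriods {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Period, in milliseconds, between balancer runs and normalizer runs.
    conf.setInt("hbase.balancer.period", 300000);
    conf.setInt("hbase.normalizer.period", 300000);
  }
}
```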
2023-07-18 02:14:52,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:14:52,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 02:14:52,734 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:14:52,738 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:14:52,758 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:52,763 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 empty. 2023-07-18 02:14:52,764 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:52,764 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 02:14:52,808 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:52,812 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fbc284aeb66f3eaca0bb2d67e73a56a3, NAME => 'hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:52,844 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:52,844 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fbc284aeb66f3eaca0bb2d67e73a56a3, disabling compactions & flushes 2023-07-18 02:14:52,844 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 
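Editor's note: the master creates 'hbase:namespace' above with a single 'info' family: ROW bloom filter, in-memory, 10 versions, 8 KB blocks. A sketch of building an equivalent descriptor with the 2.x client API and creating a user table from it ("namespace_like" is a placeholder; the real hbase:namespace table is system-managed):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateNamespaceLikeTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("info"))
          .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
          .setInMemory(true)                 // IN_MEMORY => 'true'
          .setMaxVersions(10)                // VERSIONS => '10'
          .setBlocksize(8192)                // BLOCKSIZE => '8192'
          .build();
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("namespace_like"))
          .setColumnFamily(info)
          .build());
    }
  }
}
```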
2023-07-18 02:14:52,844 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:52,844 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. after waiting 0 ms 2023-07-18 02:14:52,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:52,845 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:52,845 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:14:52,854 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:14:52,873 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646492858"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646492858"}]},"ts":"1689646492858"} 2023-07-18 02:14:52,910 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:14:52,915 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:14:52,921 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646492915"}]},"ts":"1689646492915"} 2023-07-18 02:14:52,926 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:14:52,929 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 02:14:52,930 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 02:14:52,934 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:14:52,936 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} 
racks are {/default-rack=0} 2023-07-18 02:14:52,936 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:52,936 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:52,936 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:52,936 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:14:52,936 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:52,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, ASSIGN}] 2023-07-18 02:14:52,941 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, ASSIGN 2023-07-18 02:14:52,942 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:52,943 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde empty. 2023-07-18 02:14:52,944 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:52,945 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:52,945 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 02:14:52,987 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:52,990 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7925c60bcfbbace6dabdab5258b7cdde, NAME => 'hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:53,037 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:53,037 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 7925c60bcfbbace6dabdab5258b7cdde, disabling compactions & flushes 2023-07-18 02:14:53,037 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:14:53,038 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:14:53,038 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. after waiting 0 ms 2023-07-18 02:14:53,038 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:14:53,038 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:14:53,038 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 7925c60bcfbbace6dabdab5258b7cdde: 2023-07-18 02:14:53,047 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:14:53,049 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646493049"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646493049"}]},"ts":"1689646493049"} 2023-07-18 02:14:53,054 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
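The 'hbase:rsgroup' descriptor printed above differs mainly in its TABLE_ATTRIBUTES: a MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. A hedged sketch of expressing those two attributes with TableDescriptorBuilder, against a hypothetical user table, and assuming the setCoprocessor/setRegionSplitPolicyClassName builder methods behave as their names suggest:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupLikeDescriptorSketch {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_group_like"))          // hypothetical name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))    // single family 'm', defaults
        // coprocessor$1 => '|...MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
    System.out.println(td);   // prints a descriptor much like the one logged above
  }
}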
2023-07-18 02:14:53,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:14:53,056 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646493056"}]},"ts":"1689646493056"} 2023-07-18 02:14:53,066 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 02:14:53,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:53,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:53,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:53,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:53,071 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:53,072 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7925c60bcfbbace6dabdab5258b7cdde, ASSIGN}] 2023-07-18 02:14:53,075 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7925c60bcfbbace6dabdab5258b7cdde, ASSIGN 2023-07-18 02:14:53,077 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7925c60bcfbbace6dabdab5258b7cdde, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:14:53,078 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 02:14:53,080 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7925c60bcfbbace6dabdab5258b7cdde, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:53,080 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:53,080 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646493080"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646493080"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646493080"}]},"ts":"1689646493080"} 2023-07-18 02:14:53,080 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646493080"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646493080"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646493080"}]},"ts":"1689646493080"} 2023-07-18 02:14:53,084 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:53,086 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 7925c60bcfbbace6dabdab5258b7cdde, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:14:53,239 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:53,240 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:53,241 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:53,242 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:53,247 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60912, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:53,247 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57592, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:53,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:53,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 
2023-07-18 02:14:53,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fbc284aeb66f3eaca0bb2d67e73a56a3, NAME => 'hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:53,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7925c60bcfbbace6dabdab5258b7cdde, NAME => 'hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. service=MultiRowMutationService 2023-07-18 02:14:53,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 02:14:53,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:53,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,261 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,261 INFO [StoreOpener-7925c60bcfbbace6dabdab5258b7cdde-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,264 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:14:53,264 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:14:53,264 DEBUG [StoreOpener-7925c60bcfbbace6dabdab5258b7cdde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/m 2023-07-18 02:14:53,264 DEBUG [StoreOpener-7925c60bcfbbace6dabdab5258b7cdde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/m 2023-07-18 02:14:53,264 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fbc284aeb66f3eaca0bb2d67e73a56a3 columnFamilyName info 2023-07-18 02:14:53,264 INFO 
[StoreOpener-7925c60bcfbbace6dabdab5258b7cdde-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7925c60bcfbbace6dabdab5258b7cdde columnFamilyName m 2023-07-18 02:14:53,265 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(310): Store=fbc284aeb66f3eaca0bb2d67e73a56a3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:53,266 INFO [StoreOpener-7925c60bcfbbace6dabdab5258b7cdde-1] regionserver.HStore(310): Store=7925c60bcfbbace6dabdab5258b7cdde/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:53,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,269 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,273 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:14:53,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:53,277 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:53,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:53,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 7925c60bcfbbace6dabdab5258b7cdde; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@a22c579, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:53,278 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fbc284aeb66f3eaca0bb2d67e73a56a3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10863848320, jitterRate=0.011774718761444092}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:53,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7925c60bcfbbace6dabdab5258b7cdde: 2023-07-18 02:14:53,278 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:14:53,280 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3., pid=8, masterSystemTime=1689646493239 2023-07-18 02:14:53,282 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde., pid=9, masterSystemTime=1689646493241 2023-07-18 02:14:53,286 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:53,287 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:53,288 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:14:53,288 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:53,289 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 
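At this point both system regions have been opened on specific regionservers (fbc284... on ...,35063,... and 7925c6... on ...,45077,...). From a client, the resulting placement can be observed with RegionLocator; a minimal sketch, with the connection boilerplate purely illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class WhereAreMyRegions {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Encoded region name plus the hosting server, matching the "Opened ..." lines above.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}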
2023-07-18 02:14:53,289 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646493287"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646493287"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646493287"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646493287"}]},"ts":"1689646493287"} 2023-07-18 02:14:53,290 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7925c60bcfbbace6dabdab5258b7cdde, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:53,291 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646493290"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646493290"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646493290"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646493290"}]},"ts":"1689646493290"} 2023-07-18 02:14:53,300 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 02:14:53,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,35063,1689646489808 in 208 msec 2023-07-18 02:14:53,303 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 02:14:53,304 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 7925c60bcfbbace6dabdab5258b7cdde, server=jenkins-hbase4.apache.org,45077,1689646489555 in 210 msec 2023-07-18 02:14:53,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-18 02:14:53,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, ASSIGN in 362 msec 2023-07-18 02:14:53,309 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:14:53,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 02:14:53,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7925c60bcfbbace6dabdab5258b7cdde, ASSIGN in 232 msec 2023-07-18 02:14:53,309 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646493309"}]},"ts":"1689646493309"} 2023-07-18 02:14:53,310 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:14:53,311 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646493310"}]},"ts":"1689646493310"} 2023-07-18 02:14:53,312 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 02:14:53,313 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 02:14:53,316 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:14:53,318 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:14:53,321 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 594 msec 2023-07-18 02:14:53,322 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 392 msec 2023-07-18 02:14:53,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 02:14:53,336 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:14:53,336 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:53,356 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:53,360 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:53,363 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57602, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:53,366 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60920, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:53,367 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 02:14:53,367 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 02:14:53,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 02:14:53,407 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:14:53,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 38 msec 2023-07-18 02:14:53,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 02:14:53,438 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:14:53,451 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 22 msec 2023-07-18 02:14:53,452 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:53,452 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:53,455 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:14:53,462 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 02:14:53,463 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 02:14:53,467 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 02:14:53,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.210sec 2023-07-18 02:14:53,470 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 02:14:53,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
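The two CreateNamespaceProcedure entries above create the built-in 'default' and 'hbase' namespaces. The equivalent client-side calls for a user namespace look roughly like this; the namespace name is made up:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Runs a CreateNamespaceProcedure on the master, like pid=10/pid=11 above.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());   // expect at least: default, hbase, demo_ns
      }
    }
  }
}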
2023-07-18 02:14:53,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 02:14:53,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40909,1689646487536-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 02:14:53,474 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40909,1689646487536-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-18 02:14:53,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 02:14:53,496 DEBUG [Listener at localhost/38101] zookeeper.ReadOnlyZKClient(139): Connect 0x4b32111a to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:53,502 DEBUG [Listener at localhost/38101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@227ce6b3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:53,517 DEBUG [hconnection-0x422d8bf2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:53,529 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59112, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:53,540 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:14:53,541 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:53,551 DEBUG [Listener at localhost/38101] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 02:14:53,554 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39122, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 02:14:53,569 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 02:14:53,569 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:14:53,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 02:14:53,575 DEBUG [Listener at localhost/38101] zookeeper.ReadOnlyZKClient(139): Connect 0x11ddf8cf to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:53,580 DEBUG [Listener at localhost/38101] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6a74af9e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-18 02:14:53,580 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:53,584 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:14:53,585 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017635d76e000a connected 2023-07-18 02:14:53,619 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=681, MaxFileDescriptor=60000, SystemLoadAverage=411, ProcessCount=172, AvailableMemoryMB=3436 2023-07-18 02:14:53,622 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 02:14:53,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:53,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:53,703 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 02:14:53,716 INFO [Listener at localhost/38101] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:14:53,716 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:53,717 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:53,717 INFO [Listener at localhost/38101] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:14:53,717 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:14:53,717 INFO [Listener at localhost/38101] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:14:53,717 INFO [Listener at localhost/38101] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:14:53,763 INFO [Listener at localhost/38101] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43645 2023-07-18 02:14:53,765 INFO [Listener at localhost/38101] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:14:53,768 DEBUG [Listener at localhost/38101] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:14:53,778 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block 
reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:53,786 INFO [Listener at localhost/38101] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:14:53,792 INFO [Listener at localhost/38101] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43645 connecting to ZooKeeper ensemble=127.0.0.1:54439 2023-07-18 02:14:53,798 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:436450x0, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:14:53,800 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43645-0x1017635d76e000b connected 2023-07-18 02:14:53,800 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:14:53,801 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 02:14:53,802 DEBUG [Listener at localhost/38101] zookeeper.ZKUtil(164): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:14:53,803 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43645 2023-07-18 02:14:53,803 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43645 2023-07-18 02:14:53,803 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43645 2023-07-18 02:14:53,808 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43645 2023-07-18 02:14:53,810 DEBUG [Listener at localhost/38101] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43645 2023-07-18 02:14:53,813 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:14:53,813 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:14:53,814 INFO [Listener at localhost/38101] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:14:53,814 INFO [Listener at localhost/38101] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:14:53,814 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:14:53,815 INFO [Listener at localhost/38101] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:14:53,815 
INFO [Listener at localhost/38101] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:14:53,815 INFO [Listener at localhost/38101] http.HttpServer(1146): Jetty bound to port 35389 2023-07-18 02:14:53,816 INFO [Listener at localhost/38101] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:14:53,820 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:53,820 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2474d7bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:14:53,821 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:53,821 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@70ac5277{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:14:53,955 INFO [Listener at localhost/38101] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:14:53,956 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:14:53,956 INFO [Listener at localhost/38101] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:14:53,956 INFO [Listener at localhost/38101] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:14:53,957 INFO [Listener at localhost/38101] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:14:53,959 INFO [Listener at localhost/38101] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@e9c768a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/java.io.tmpdir/jetty-0_0_0_0-35389-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6179629254609022692/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:14:53,960 INFO [Listener at localhost/38101] server.AbstractConnector(333): Started ServerConnector@53ac63a1{HTTP/1.1, (http/1.1)}{0.0.0.0:35389} 2023-07-18 02:14:53,961 INFO [Listener at localhost/38101] server.Server(415): Started @12172ms 2023-07-18 02:14:53,964 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(951): ClusterId : 6a927052-2b6c-47ef-86d7-463ca10625a2 2023-07-18 02:14:53,967 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:14:53,969 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:14:53,969 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:14:53,972 DEBUG [RS:3;jenkins-hbase4:43645] 
procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:14:53,974 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ReadOnlyZKClient(139): Connect 0x1afe09e6 to 127.0.0.1:54439 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:14:53,985 DEBUG [RS:3;jenkins-hbase4:43645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1439e4c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:14:53,986 DEBUG [RS:3;jenkins-hbase4:43645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10066296, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:14:54,000 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:43645 2023-07-18 02:14:54,000 INFO [RS:3;jenkins-hbase4:43645] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:14:54,000 INFO [RS:3;jenkins-hbase4:43645] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:14:54,000 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:14:54,001 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40909,1689646487536 with isa=jenkins-hbase4.apache.org/172.31.14.131:43645, startcode=1689646493716 2023-07-18 02:14:54,001 DEBUG [RS:3;jenkins-hbase4:43645] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:14:54,006 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38831, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:14:54,007 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40909] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,007 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
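The test talks to the RSGroupAdminEndpoint loaded on the master ('list rsgroup' above) and, as shown earlier, first disables the balancer (balanceSwitch=false). A hedged sketch of the same interactions using the RSGroupAdminClient helper from the hbase-rsgroup module; treat the RSGroupAdminClient constructor and method names as an assumption about this module's API, not a verified signature list:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupListSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.balancerSwitch(false, true);          // same effect as "set balanceSwitch=false" above
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        // A fresh cluster has only the "default" group, containing every live regionserver.
        System.out.println(group.getName() + " servers=" + group.getServers());
      }
    }
  }
}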
2023-07-18 02:14:54,008 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7 2023-07-18 02:14:54,008 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45101 2023-07-18 02:14:54,008 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42641 2023-07-18 02:14:54,013 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:14:54,013 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:14:54,013 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:14:54,013 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:14:54,014 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:54,015 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43645,1689646493716] 2023-07-18 02:14:54,015 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:14:54,015 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:54,015 WARN [RS:3;jenkins-hbase4:43645] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
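Once the ephemeral znode appears under /hbase/rs, the master's RegionServerTracker counts four live regionservers. A client can see the same membership through ClusterMetrics; a small sketch, connection boilerplate again illustrative:

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      // One entry per live regionserver ephemeral node under /hbase/rs.
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println(sn);   // e.g. jenkins-hbase4.apache.org,43645,1689646493716
      }
      System.out.println("live servers: " + metrics.getLiveServerMetrics().size());
    }
  }
}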
2023-07-18 02:14:54,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:54,015 INFO [RS:3;jenkins-hbase4:43645] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:54,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:54,016 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,023 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40909,1689646487536] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 02:14:54,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,023 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,024 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,024 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,025 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,025 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:54,026 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:54,027 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:54,032 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:54,032 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,033 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,033 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ZKUtil(162): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:54,034 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:14:54,035 INFO [RS:3;jenkins-hbase4:43645] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:14:54,038 INFO [RS:3;jenkins-hbase4:43645] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:14:54,038 INFO [RS:3;jenkins-hbase4:43645] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:14:54,039 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:54,039 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:14:54,042 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
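The MemStoreFlusher and PressureAwareCompactionThroughputController lines above reflect region server configuration rather than anything test-specific. As a hedged illustration, the keys below are the ones those components read; the numeric values simply mirror what this run logged and are not claimed to be HBase defaults:

```java
// Sketch only: configuration keys behind the memstore and compaction-throughput limits above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

static Configuration flushAndCompactionLimits() {
  Configuration conf = HBaseConfiguration.create();
  // globalMemStoreLimit (782.4 M here) = heap size * this fraction; the low-water mark is derived from it.
  conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
  // PressureAwareCompactionThroughputController bounds: 100 MB/s upper, 50 MB/s lower.
  conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
  conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
  return conf;
}
```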
2023-07-18 02:14:54,042 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,042 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,042 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,043 DEBUG [RS:3;jenkins-hbase4:43645] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:14:54,049 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:54,049 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:54,049 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:14:54,061 INFO [RS:3;jenkins-hbase4:43645] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:14:54,061 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43645,1689646493716-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 02:14:54,071 INFO [RS:3;jenkins-hbase4:43645] regionserver.Replication(203): jenkins-hbase4.apache.org,43645,1689646493716 started 2023-07-18 02:14:54,071 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43645,1689646493716, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43645, sessionid=0x1017635d76e000b 2023-07-18 02:14:54,071 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:14:54,072 DEBUG [RS:3;jenkins-hbase4:43645] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,072 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43645,1689646493716' 2023-07-18 02:14:54,072 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:14:54,072 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:14:54,072 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43645,1689646493716' 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:14:54,073 DEBUG [RS:3;jenkins-hbase4:43645] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:14:54,074 INFO [RS:3;jenkins-hbase4:43645] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:14:54,074 INFO [RS:3;jenkins-hbase4:43645] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
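At this point RS:3 is fully online (procedure members registered, quota support disabled), giving the mini cluster its fourth region server. A minimal sketch of how a test typically adds and waits for such a server; TEST_UTIL is an assumed, already started HBaseTestingUtility and is not shown in this log:

```java
// Sketch only: add the fourth region server seen above and wait for it to register.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

static void startFourthRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
  MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
  // Mirrors the "RS:3;jenkins-hbase4:43645" startup recorded above.
  cluster.startRegionServer();
  // Block until all four region servers have registered under /hbase/rs.
  TEST_UTIL.waitFor(60000, () -> cluster.getLiveRegionServerThreads().size() == 4);
}
```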
2023-07-18 02:14:54,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:14:54,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:54,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:14:54,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:14:54,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:14:54,090 DEBUG [hconnection-0x2e79eb29-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:54,093 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59114, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:54,097 DEBUG [hconnection-0x2e79eb29-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:54,100 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57606, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:54,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:54,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:54,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:14:54,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:14:54,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39122 deadline: 1689647694111, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:14:54,113 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:14:54,115 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:54,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:54,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:54,117 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:14:54,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:14:54,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:14:54,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:14:54,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:14:54,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:54,133 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:14:54,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:14:54,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:14:54,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:54,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:54,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:54,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:14:54,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:14:54,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(238): Moving server region fbc284aeb66f3eaca0bb2d67e73a56a3, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE 2023-07-18 02:14:54,156 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE 2023-07-18 02:14:54,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:54,158 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,158 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646494158"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646494158"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646494158"}]},"ts":"1689646494158"} 2023-07-18 02:14:54,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 02:14:54,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-18 02:14:54,159 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 02:14:54,161 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39557,1689646489998, state=CLOSING 2023-07-18 02:14:54,163 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:14:54,163 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:14:54,163 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:54,163 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:54,171 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:54,177 INFO [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43645%2C1689646493716, suffix=, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,43645,1689646493716, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:14:54,205 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:54,205 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:54,206 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:54,212 INFO [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,43645,1689646493716/jenkins-hbase4.apache.org%2C43645%2C1689646493716.1689646494178 2023-07-18 02:14:54,212 DEBUG [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK], DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK]] 2023-07-18 02:14:54,332 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 02:14:54,333 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:14:54,333 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:14:54,333 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:14:54,333 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:14:54,333 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:14:54,334 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.86 KB heapSize=5.59 KB 2023-07-18 02:14:54,458 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.68 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/f6975690ee324060b18de846d256e046 2023-07-18 02:14:54,543 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/8aaa79afa8164b0582eb69bd2cec2d06 2023-07-18 02:14:54,556 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/f6975690ee324060b18de846d256e046 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/f6975690ee324060b18de846d256e046 2023-07-18 02:14:54,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/f6975690ee324060b18de846d256e046, entries=21, sequenceid=15, filesize=7.1 K 2023-07-18 02:14:54,569 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/8aaa79afa8164b0582eb69bd2cec2d06 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/8aaa79afa8164b0582eb69bd2cec2d06 2023-07-18 02:14:54,585 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/8aaa79afa8164b0582eb69bd2cec2d06, entries=4, sequenceid=15, filesize=4.8 K 2023-07-18 02:14:54,588 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.86 KB/2924, heapSize ~5.30 KB/5432, currentSize=0 B/0 for 1588230740 in 254ms, sequenceid=15, compaction requested=false 2023-07-18 02:14:54,590 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 02:14:54,602 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-18 02:14:54,603 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:14:54,604 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:14:54,604 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:14:54,604 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,43645,1689646493716 record at close sequenceid=15 2023-07-18 02:14:54,607 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 02:14:54,607 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 02:14:54,610 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-18 02:14:54,610 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39557,1689646489998 in 444 msec 2023-07-18 02:14:54,611 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:14:54,762 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
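The preceding records show hbase:meta being flushed (info and table families), closed on jenkins-hbase4.apache.org,39557 and handed back to the balancer, which keeps the plan of reopening it on the new server. The same relocation could be requested explicitly as below; this is a sketch only, with `admin` an assumed org.apache.hadoop.hbase.client.Admin handle rather than anything invoked by this test:

```java
// Sketch only: an explicit equivalent of the hbase:meta relocation recorded above.
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

static void moveMeta(Admin admin) throws Exception {
  // 1588230740 is the fixed encoded name of hbase:meta,,1; the target server comes from the log.
  admin.move(Bytes.toBytes("1588230740"),
      ServerName.valueOf("jenkins-hbase4.apache.org,43645,1689646493716"));
}
```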
2023-07-18 02:14:54,762 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43645,1689646493716, state=OPENING 2023-07-18 02:14:54,764 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:14:54,764 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:54,764 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:14:54,918 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:54,918 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:54,921 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42420, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:54,927 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 02:14:54,927 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:14:54,930 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43645%2C1689646493716.meta, suffix=.meta, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,43645,1689646493716, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:14:54,953 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:14:54,954 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:14:54,957 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:14:54,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,43645,1689646493716/jenkins-hbase4.apache.org%2C43645%2C1689646493716.meta.1689646494931.meta 2023-07-18 02:14:54,960 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK]] 2023-07-18 02:14:54,960 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:54,960 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:14:54,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 02:14:54,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 02:14:54,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 02:14:54,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:54,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 02:14:54,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 02:14:54,964 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:14:54,966 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:14:54,966 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:14:54,967 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:14:54,981 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/f6975690ee324060b18de846d256e046 2023-07-18 02:14:54,982 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:54,982 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:14:54,984 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:14:54,984 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:14:54,985 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:14:54,985 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:54,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:14:54,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:14:54,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:14:54,988 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:14:55,004 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/8aaa79afa8164b0582eb69bd2cec2d06 2023-07-18 02:14:55,005 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:55,006 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:55,009 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:14:55,013 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 02:14:55,015 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:14:55,017 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11112437120, jitterRate=0.03492635488510132}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:14:55,017 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:14:55,019 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689646494918 2023-07-18 02:14:55,025 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43645,1689646493716, state=OPEN 2023-07-18 02:14:55,026 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:14:55,026 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 02:14:55,026 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:14:55,027 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 02:14:55,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-18 02:14:55,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,43645,1689646493716 in 263 msec 2023-07-18 02:14:55,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 875 msec 2023-07-18 02:14:55,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-18 02:14:55,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fbc284aeb66f3eaca0bb2d67e73a56a3, disabling compactions & flushes 2023-07-18 02:14:55,182 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. after waiting 0 ms 2023-07-18 02:14:55,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fbc284aeb66f3eaca0bb2d67e73a56a3 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 02:14:55,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/.tmp/info/9b08322805ff412fa8b15e0d8d41867f 2023-07-18 02:14:55,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/.tmp/info/9b08322805ff412fa8b15e0d8d41867f as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info/9b08322805ff412fa8b15e0d8d41867f 2023-07-18 02:14:55,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info/9b08322805ff412fa8b15e0d8d41867f, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 02:14:55,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for fbc284aeb66f3eaca0bb2d67e73a56a3 in 82ms, sequenceid=6, compaction requested=false 2023-07-18 02:14:55,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 02:14:55,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-18 02:14:55,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,273 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:14:55,273 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fbc284aeb66f3eaca0bb2d67e73a56a3 move to jenkins-hbase4.apache.org,43645,1689646493716 record at close sequenceid=6 2023-07-18 02:14:55,276 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,276 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=CLOSED 2023-07-18 02:14:55,277 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646495276"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646495276"}]},"ts":"1689646495276"} 2023-07-18 02:14:55,278 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39557] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 217 connection: 172.31.14.131:59100 deadline: 1689646555277, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43645 startCode=1689646493716. As of locationSeqNum=15. 2023-07-18 02:14:55,379 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:55,381 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:55,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-18 02:14:55,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,35063,1689646489808 in 1.2210 sec 2023-07-18 02:14:55,389 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:14:55,539 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
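The namespace region fbc284aeb66f3eaca0bb2d67e73a56a3 has now been flushed and closed on jenkins-hbase4.apache.org,35063 and is queued for reopening on 43645. When driving such moves from a test, a common pattern is to block until the transition settles; a short sketch, again assuming the hypothetical TEST_UTIL handle from the earlier sketches:

```java
// Sketch only: wait for the REOPEN/MOVE of hbase:namespace above to settle.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

static void waitForNamespaceMove(HBaseTestingUtility TEST_UTIL) throws Exception {
  TEST_UTIL.waitUntilNoRegionsInTransition(60000);
  // The region should now report jenkins-hbase4.apache.org,43645 as its location.
  TEST_UTIL.getConnection().getRegionLocator(TableName.NAMESPACE_TABLE_NAME)
      .getAllRegionLocations().forEach(l -> System.out.println(l.getServerName()));
}
```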
2023-07-18 02:14:55,539 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:55,540 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646495539"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646495539"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646495539"}]},"ts":"1689646495539"} 2023-07-18 02:14:55,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:55,701 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,701 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fbc284aeb66f3eaca0bb2d67e73a56a3, NAME => 'hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:14:55,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:55,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,702 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,704 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,706 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:14:55,706 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:14:55,707 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fbc284aeb66f3eaca0bb2d67e73a56a3 columnFamilyName info 2023-07-18 02:14:55,717 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info/9b08322805ff412fa8b15e0d8d41867f 2023-07-18 02:14:55,718 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(310): Store=fbc284aeb66f3eaca0bb2d67e73a56a3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:55,720 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,730 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:14:55,732 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fbc284aeb66f3eaca0bb2d67e73a56a3; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10898964000, jitterRate=0.015045121312141418}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:55,732 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:14:55,734 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3., pid=17, masterSystemTime=1689646495696 2023-07-18 02:14:55,736 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:14:55,736 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 
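
With pid=17 about to report OPEN at sequenceid=10, the namespace region is now served from the destination server. A minimal sketch, assuming a reachable cluster, of how a client can confirm the new location by forcing a fresh meta lookup instead of trusting its cache (the stale cached entry is what produced the RegionMovedException a few entries earlier):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class NamespaceRegionLocationSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.NAMESPACE_TABLE_NAME)) {
      // reload=true bypasses the client-side location cache and re-reads hbase:meta,
      // so the location reflects the server the region was just reopened on.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println(loc.getRegion().getRegionNameAsString() + " on " + loc.getServerName());
    }
  }
}
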
2023-07-18 02:14:55,738 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:55,738 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646495738"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646495738"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646495738"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646495738"}]},"ts":"1689646495738"} 2023-07-18 02:14:55,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-18 02:14:55,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,43645,1689646493716 in 199 msec 2023-07-18 02:14:55,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE in 1.5940 sec 2023-07-18 02:14:56,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to default 2023-07-18 02:14:56,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:56,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:14:56,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:56,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:56,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:56,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:14:56,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:14:56,179 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:56,181 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:14:56,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-18 02:14:56,186 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:56,187 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:56,187 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:14:56,188 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:14:56,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 02:14:56,197 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:14:56,207 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,207 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,207 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,207 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,207 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,208 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 empty. 2023-07-18 02:14:56,208 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 empty. 
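
The "Move servers done: default => Group_testTableMoveTruncateAndDrop_1141100661" and ListRSGroupInfos/GetRSGroupInfo entries are the master-side view of rsgroup admin RPCs arriving through the RSGroupAdminEndpoint. A hedged sketch of the client side, assuming the hbase-rsgroup coprocessor endpoint is installed as in this test; the RSGroupAdminClient method names and the host/port value are assumptions to adapt to your deployment, not taken from the test.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveServersToGroupSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_1141100661";
      rsGroupAdmin.addRSGroup(group);
      // Move a region server (host:port is a placeholder) out of 'default' into the new
      // group; the master logs this as RSGroupAdminService.MoveServers.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 35063)), group);
      // GetRSGroupInfo in the log: read back the group membership.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
      System.out.println(group + " now holds servers " + info.getServers());
    }
  }
}
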
2023-07-18 02:14:56,208 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe empty. 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 empty. 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 empty. 2023-07-18 02:14:56,209 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,212 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,212 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 02:14:56,237 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:56,239 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 55df0e4f2ce9a9ca3676c096f6b5defe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:56,239 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9081579ad90c011736a6a20282632a80, NAME => 
'Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:56,239 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5b31c79b0c2dd00c2c5b23efa1c80b14, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:56,284 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 9081579ad90c011736a6a20282632a80, disabling compactions & flushes 2023-07-18 02:14:56,285 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:56,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:56,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. after waiting 0 ms 2023-07-18 02:14:56,285 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:56,286 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
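
The five regions being initialized have boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '' — the interior keys are what Bytes.split yields for a pre-split table spanning 'aaaaa' to 'zzzzz' with five regions. A sketch of an equivalent client create call (not the test's own helper; table and family names copied from the log, connection setup is a placeholder):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // one family, all defaults
          .build();
      // Five regions between 'aaaaa' and 'zzzzz'; the master interpolates the interior
      // boundaries, which matches the STARTKEY/ENDKEY values in the log above.
      admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
    }
  }
}
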
2023-07-18 02:14:56,286 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 9081579ad90c011736a6a20282632a80: 2023-07-18 02:14:56,289 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b8f9fa9f57d04072c7900a18782ec9b9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:56,295 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,296 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 55df0e4f2ce9a9ca3676c096f6b5defe, disabling compactions & flushes 2023-07-18 02:14:56,296 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:56,296 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:56,296 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. after waiting 0 ms 2023-07-18 02:14:56,296 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:56,296 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 
2023-07-18 02:14:56,296 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 55df0e4f2ce9a9ca3676c096f6b5defe: 2023-07-18 02:14:56,297 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => d26a05047c700cd40a14b5289e5087f2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:56,298 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,298 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 5b31c79b0c2dd00c2c5b23efa1c80b14, disabling compactions & flushes 2023-07-18 02:14:56,298 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:56,298 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:56,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. after waiting 0 ms 2023-07-18 02:14:56,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:56,299 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 
2023-07-18 02:14:56,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 5b31c79b0c2dd00c2c5b23efa1c80b14: 2023-07-18 02:14:56,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 02:14:56,315 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,315 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b8f9fa9f57d04072c7900a18782ec9b9, disabling compactions & flushes 2023-07-18 02:14:56,315 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,315 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,315 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing d26a05047c700cd40a14b5289e5087f2, disabling compactions & flushes 2023-07-18 02:14:56,315 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,315 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. after waiting 0 ms 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,316 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 
after waiting 0 ms 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b8f9fa9f57d04072c7900a18782ec9b9: 2023-07-18 02:14:56,316 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,318 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,318 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for d26a05047c700cd40a14b5289e5087f2: 2023-07-18 02:14:56,322 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:14:56,323 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646496323"}]},"ts":"1689646496323"} 2023-07-18 02:14:56,324 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646496323"}]},"ts":"1689646496323"} 2023-07-18 02:14:56,324 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646496323"}]},"ts":"1689646496323"} 2023-07-18 02:14:56,324 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646496323"}]},"ts":"1689646496323"} 2023-07-18 02:14:56,324 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496323"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646496323"}]},"ts":"1689646496323"} 2023-07-18 02:14:56,379 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
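
The MetaTableAccessor puts above write one info:regioninfo/info:state pair per region into hbase:meta, plus a table-state row, before "Added 5 regions to meta". A sketch of reading those rows back with a plain client scan (row prefix and column names as they appear in the log; connection setup is a placeholder):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMetaForTableSketch {
  public static void main(String[] args) throws IOException {
    byte[] info = Bytes.toBytes("info");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // Region rows in hbase:meta start with "<tableName>,", so a prefix scan returns
      // exactly the five rows written above.
      Scan scan = new Scan()
          .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"))
          .addColumn(info, Bytes.toBytes("regioninfo"))
          .addColumn(info, Bytes.toBytes("state"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toStringBinary(r.getRow()) + " state="
              + Bytes.toString(r.getValue(info, Bytes.toBytes("state"))));
        }
      }
    }
  }
}
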
2023-07-18 02:14:56,381 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:14:56,381 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646496381"}]},"ts":"1689646496381"} 2023-07-18 02:14:56,384 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 02:14:56,389 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:56,390 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:56,390 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:56,390 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:56,390 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, ASSIGN}] 2023-07-18 02:14:56,393 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, ASSIGN 2023-07-18 02:14:56,394 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, ASSIGN 2023-07-18 02:14:56,395 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, ASSIGN 2023-07-18 02:14:56,395 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, ASSIGN 2023-07-18 02:14:56,396 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:14:56,399 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, ASSIGN 2023-07-18 02:14:56,399 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:14:56,399 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:14:56,399 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:14:56,400 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:14:56,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 02:14:56,546 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
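
Each ASSIGN procedure above walks its region from OFFLINE through OPENING to OPEN on the chosen server. From the client side there is usually no need to track the pids: waiting for the table to report available is enough. A minimal sketch, assuming the synchronous Admin client (createTable itself already blocks, so the polling loop mainly matters for async callers):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class WaitForTableOnlineSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // isTableAvailable flips to true once every region of the table is open.
      while (!admin.isTableAvailable(tn)) {
        Thread.sleep(100);
      }
      System.out.println(tn + " has " + admin.getRegions(tn).size() + " regions online");
    }
  }
}
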
2023-07-18 02:14:56,550 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,550 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:56,550 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:56,550 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646496550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646496550"}]},"ts":"1689646496550"} 2023-07-18 02:14:56,550 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,550 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,550 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646496550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646496550"}]},"ts":"1689646496550"} 2023-07-18 02:14:56,550 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646496550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646496550"}]},"ts":"1689646496550"} 2023-07-18 02:14:56,550 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646496550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646496550"}]},"ts":"1689646496550"} 2023-07-18 02:14:56,550 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496550"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646496550"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646496550"}]},"ts":"1689646496550"} 2023-07-18 02:14:56,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=20, state=RUNNABLE; OpenRegionProcedure 
5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:56,554 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=21, state=RUNNABLE; OpenRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:14:56,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=22, state=RUNNABLE; OpenRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:14:56,557 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=23, state=RUNNABLE; OpenRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:56,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=19, state=RUNNABLE; OpenRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:56,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d26a05047c700cd40a14b5289e5087f2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 02:14:56,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,719 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 
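
The OpenRegionProcedures above target only jenkins-hbase4.apache.org,43645 and jenkins-hbase4.apache.org,45077, i.e. servers belonging to the new rsgroup. A sketch of verifying that placement by counting the table's regions per live server; this uses the 2.x cluster-metrics Admin calls under the assumption they are available in your client, and nothing here is taken from the test:

import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RegionsPerServerSketch {
  public static void main(String[] args) throws IOException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      for (ServerName sn : admin
          .getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS))
          .getLiveServerMetrics().keySet()) {
        // Count how many of this table's regions each region server is hosting.
        long count = admin.getRegions(sn).stream()
            .filter(r -> r.getTable().equals(tn))
            .count();
        System.out.println(sn + " hosts " + count + " region(s) of " + tn);
      }
    }
  }
}
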
2023-07-18 02:14:56,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b8f9fa9f57d04072c7900a18782ec9b9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 02:14:56,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,722 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,724 DEBUG [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/f 2023-07-18 02:14:56,724 DEBUG [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/f 2023-07-18 02:14:56,725 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b8f9fa9f57d04072c7900a18782ec9b9 columnFamilyName f 2023-07-18 02:14:56,725 DEBUG [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/f 2023-07-18 02:14:56,725 DEBUG [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/f 2023-07-18 02:14:56,725 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] regionserver.HStore(310): Store=b8f9fa9f57d04072c7900a18782ec9b9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:56,726 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d26a05047c700cd40a14b5289e5087f2 columnFamilyName f 2023-07-18 02:14:56,727 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] regionserver.HStore(310): Store=d26a05047c700cd40a14b5289e5087f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:56,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:56,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:56,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:56,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:56,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b8f9fa9f57d04072c7900a18782ec9b9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9765770720, jitterRate=-0.09049172699451447}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:56,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d26a05047c700cd40a14b5289e5087f2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11956978400, jitterRate=0.11358039081096649}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:56,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b8f9fa9f57d04072c7900a18782ec9b9: 2023-07-18 02:14:56,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d26a05047c700cd40a14b5289e5087f2: 2023-07-18 02:14:56,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9., pid=26, masterSystemTime=1689646496710 2023-07-18 02:14:56,740 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2., pid=27, masterSystemTime=1689646496706 2023-07-18 02:14:56,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:56,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 
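
The CompactionConfiguration line printed for each newly opened store (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560, major period 604800000 with 0.5 jitter) simply echoes defaults read from the server Configuration. A sketch of the corresponding hbase-site.xml keys; the key names are the standard compaction properties, but treat them as assumptions if your HBase version differs:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionDefaultsSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // minFilesToCompact / maxFilesToCompact in the log
    System.out.println(conf.getInt("hbase.hstore.compaction.min", 3));
    System.out.println(conf.getInt("hbase.hstore.compaction.max", 10));
    // ratio 1.2 and off-peak ratio 5.0
    System.out.println(conf.getFloat("hbase.hstore.compaction.ratio", 1.2F));
    System.out.println(conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F));
    // major compaction period 604800000 ms (7 days) with 0.5 jitter
    System.out.println(conf.getLong("hbase.hregion.majorcompaction", 604800000L));
    System.out.println(conf.getFloat("hbase.hregion.majorcompaction.jitter", 0.5F));
  }
}
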
2023-07-18 02:14:56,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55df0e4f2ce9a9ca3676c096f6b5defe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 02:14:56,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,743 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:56,743 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496743"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646496743"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646496743"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646496743"}]},"ts":"1689646496743"} 2023-07-18 02:14:56,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:56,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
2023-07-18 02:14:56,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9081579ad90c011736a6a20282632a80, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 02:14:56,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,744 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,745 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496744"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646496744"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646496744"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646496744"}]},"ts":"1689646496744"} 2023-07-18 02:14:56,747 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,750 DEBUG [StoreOpener-9081579ad90c011736a6a20282632a80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/f 2023-07-18 02:14:56,750 DEBUG [StoreOpener-9081579ad90c011736a6a20282632a80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/f 2023-07-18 02:14:56,751 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,753 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); 
ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9081579ad90c011736a6a20282632a80 columnFamilyName f 2023-07-18 02:14:56,755 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] regionserver.HStore(310): Store=9081579ad90c011736a6a20282632a80/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:56,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=22 2023-07-18 02:14:56,756 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=22, state=SUCCESS; OpenRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,45077,1689646489555 in 192 msec 2023-07-18 02:14:56,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,758 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=23 2023-07-18 02:14:56,758 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=23, state=SUCCESS; OpenRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,43645,1689646493716 in 191 msec 2023-07-18 02:14:56,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, ASSIGN in 366 msec 2023-07-18 02:14:56,760 DEBUG [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/f 2023-07-18 02:14:56,760 DEBUG [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/f 2023-07-18 02:14:56,760 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55df0e4f2ce9a9ca3676c096f6b5defe columnFamilyName f 2023-07-18 02:14:56,761 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, ASSIGN in 368 msec 2023-07-18 02:14:56,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:56,763 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] regionserver.HStore(310): Store=55df0e4f2ce9a9ca3676c096f6b5defe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:56,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,769 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:56,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:56,774 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55df0e4f2ce9a9ca3676c096f6b5defe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9820675840, jitterRate=-0.08537828922271729}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:56,774 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55df0e4f2ce9a9ca3676c096f6b5defe: 2023-07-18 02:14:56,775 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe., pid=25, masterSystemTime=1689646496710 2023-07-18 02:14:56,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:56,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 
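[Editor's note] The SteppingSplitPolicy and CompactionConfiguration lines above echo standard settings (desiredMaxFileSize derives from hbase.hregion.max.filesize plus the logged jitterRate; minCompactSize/minFilesToCompact/maxFilesToCompact/ratio map to the hbase.hstore.compaction.* keys). A minimal sketch of overriding those keys on a Configuration before starting a cluster or test utility; the values shown are the defaults visible in this log, used here purely for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SplitAndCompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Split policy seen in the log: SteppingSplitPolicy wrapping the constant-size policy.
    conf.set("hbase.regionserver.region.split.policy",
        "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy");
    // Base for desiredMaxFileSize (a per-region random jitter is applied on top).
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024);
    // Compaction knobs behind minCompactSize / minFilesToCompact / maxFilesToCompact / ratio.
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
    conf.setInt("hbase.hstore.compaction.min", 3);
    conf.setInt("hbase.hstore.compaction.max", 10);
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    // Hand this conf to HBaseTestingUtility / ConnectionFactory as needed.
  }
}
```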
2023-07-18 02:14:56,778 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:56,779 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496778"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646496778"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646496778"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646496778"}]},"ts":"1689646496778"} 2023-07-18 02:14:56,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-18 02:14:56,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; OpenRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,45077,1689646489555 in 227 msec 2023-07-18 02:14:56,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:56,787 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9081579ad90c011736a6a20282632a80; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10689650880, jitterRate=-0.004448682069778442}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:56,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9081579ad90c011736a6a20282632a80: 2023-07-18 02:14:56,789 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80., pid=28, masterSystemTime=1689646496706 2023-07-18 02:14:56,789 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, ASSIGN in 395 msec 2023-07-18 02:14:56,791 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:56,792 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:56,792 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 
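[Editor's note] The PEWorker Put entries above write the info:server, info:serverstartcode and info:seqnumDuringOpen columns of hbase:meta as each region reaches OPEN. A client can observe the resulting assignments through RegionLocator without reading meta directly; a small sketch assuming an already-open Connection:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class PrintAssignments {
  // Prints where every region of the table is currently deployed, per hbase:meta.
  static void printAssignments(Connection conn, TableName table) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```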
2023-07-18 02:14:56,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5b31c79b0c2dd00c2c5b23efa1c80b14, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 02:14:56,792 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,792 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:56,792 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646496792"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646496792"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646496792"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646496792"}]},"ts":"1689646496792"} 2023-07-18 02:14:56,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=19 2023-07-18 02:14:56,804 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=19, state=SUCCESS; OpenRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,43645,1689646493716 in 238 msec 2023-07-18 02:14:56,804 INFO [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,800 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, ASSIGN in 408 msec 2023-07-18 02:14:56,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 02:14:56,824 DEBUG [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/f 2023-07-18 02:14:56,824 DEBUG [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/f 2023-07-18 02:14:56,825 INFO [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5b31c79b0c2dd00c2c5b23efa1c80b14 columnFamilyName f 2023-07-18 02:14:56,826 INFO [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] regionserver.HStore(310): Store=5b31c79b0c2dd00c2c5b23efa1c80b14/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:56,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:56,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:56,840 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5b31c79b0c2dd00c2c5b23efa1c80b14; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10327298080, jitterRate=-0.0381954163312912}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:56,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5b31c79b0c2dd00c2c5b23efa1c80b14: 2023-07-18 02:14:56,843 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14., pid=24, masterSystemTime=1689646496706 2023-07-18 02:14:56,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 
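[Editor's note] The "Created cacheConfig" lines above list per-family block-cache flags (cacheDataOnRead, cacheDataOnWrite, prefetchOnOpen, ...). Those flags correspond to column-family descriptor settings; the mapping below is an illustration using the default values visible in the log, not code from this test (cacheDataCompressed is a cluster-wide setting and has no per-family builder call).

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyCacheFlags {
  static ColumnFamilyDescriptor familyWithCacheFlags() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
        .setBlockCacheEnabled(true)      // cacheDataOnRead
        .setCacheDataOnWrite(false)      // cacheDataOnWrite
        .setCacheIndexesOnWrite(false)   // cacheIndexesOnWrite
        .setCacheBloomsOnWrite(false)    // cacheBloomsOnWrite
        .setEvictBlocksOnClose(false)    // cacheEvictOnClose
        .setPrefetchBlocksOnOpen(false)  // prefetchOnOpen
        .build();
  }
}
```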
2023-07-18 02:14:56,847 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:56,848 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:56,848 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646496848"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646496848"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646496848"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646496848"}]},"ts":"1689646496848"} 2023-07-18 02:14:56,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=20 2023-07-18 02:14:56,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=20, state=SUCCESS; OpenRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,43645,1689646493716 in 299 msec 2023-07-18 02:14:56,863 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=18 2023-07-18 02:14:56,870 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, ASSIGN in 466 msec 2023-07-18 02:14:56,871 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:14:56,871 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646496871"}]},"ts":"1689646496871"} 2023-07-18 02:14:56,874 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 02:14:56,879 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:14:56,883 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 702 msec 2023-07-18 02:14:57,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-18 02:14:57,309 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-18 02:14:57,309 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-18 02:14:57,310 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:57,311 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39557] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:59112 deadline: 1689646557311, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43645 startCode=1689646493716. As of locationSeqNum=15. 2023-07-18 02:14:57,415 DEBUG [hconnection-0x422d8bf2-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:14:57,419 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42436, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:14:57,440 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-18 02:14:57,441 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:57,441 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 02:14:57,441 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:57,446 DEBUG [Listener at localhost/38101] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:57,449 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60922, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:57,452 DEBUG [Listener at localhost/38101] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:57,457 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59130, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:57,458 DEBUG [Listener at localhost/38101] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:57,465 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42446, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:57,467 DEBUG [Listener at localhost/38101] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:14:57,469 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:14:57,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:57,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:14:57,485 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:14:57,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:14:57,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:14:57,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 9081579ad90c011736a6a20282632a80 to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:57,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:57,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:57,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:57,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:57,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, REOPEN/MOVE 2023-07-18 02:14:57,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 5b31c79b0c2dd00c2c5b23efa1c80b14 to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:57,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:57,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:57,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 
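[Editor's note] At this point the test asks the master to move Group_testTableMoveTruncateAndDrop into the rsgroup Group_testTableMoveTruncateAndDrop_1141100661; the znode updates under /hbase/rsgroup and the REOPEN/MOVE procedures that follow are the server side of that single call. A minimal client-side sketch using the hbase-rsgroup admin client, assuming an open Connection and a cluster with the RSGroupAdminEndpoint loaded as in this mini-cluster (servers would normally be moved into the group first with moveServers):

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroup {
  static void moveTable(Connection conn, String group, TableName table) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup(group);                                 // create the target group
    rsGroupAdmin.moveTables(Collections.singleton(table), group);   // triggers the REOPEN/MOVE procedures seen above
    // Confirm the table is now tracked by the target group.
    System.out.println(rsGroupAdmin.getRSGroupInfoOfTable(table).getName());
  }
}
```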
2023-07-18 02:14:57,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, REOPEN/MOVE 2023-07-18 02:14:57,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 55df0e4f2ce9a9ca3676c096f6b5defe to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:57,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, REOPEN/MOVE 2023-07-18 02:14:57,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region b8f9fa9f57d04072c7900a18782ec9b9 to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:57,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:57,551 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, REOPEN/MOVE 2023-07-18 02:14:57,552 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, REOPEN/MOVE 2023-07-18 02:14:57,552 INFO [PEWorker-3] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, REOPEN/MOVE 2023-07-18 02:14:57,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, REOPEN/MOVE 2023-07-18 02:14:57,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region d26a05047c700cd40a14b5289e5087f2 to RSGroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:14:57,565 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, REOPEN/MOVE 2023-07-18 02:14:57,568 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:57,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:57,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:57,568 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:14:57,569 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497568"}]},"ts":"1689646497568"} 2023-07-18 02:14:57,569 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497568"}]},"ts":"1689646497568"} 2023-07-18 02:14:57,568 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:57,569 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497568"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497568"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497568"}]},"ts":"1689646497568"} 2023-07-18 02:14:57,568 DEBUG [PEWorker-5] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646497567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497567"}]},"ts":"1689646497567"} 2023-07-18 02:14:57,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:57,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:57,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:14:57,581 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:57,583 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=32, state=RUNNABLE; CloseRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:14:57,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, REOPEN/MOVE 2023-07-18 02:14:57,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1141100661, current retry=0 2023-07-18 02:14:57,585 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=31, state=RUNNABLE; CloseRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:14:57,591 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=29, state=RUNNABLE; CloseRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:57,592 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=30, state=RUNNABLE; CloseRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:57,594 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, REOPEN/MOVE 2023-07-18 02:14:57,602 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:14:57,602 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646497602"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497602"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497602"}]},"ts":"1689646497602"} 2023-07-18 02:14:57,609 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; CloseRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:14:57,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:57,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b8f9fa9f57d04072c7900a18782ec9b9, disabling compactions & flushes 2023-07-18 02:14:57,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:57,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:57,752 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. after waiting 0 ms 2023-07-18 02:14:57,753 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:57,773 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:57,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d26a05047c700cd40a14b5289e5087f2, disabling compactions & flushes 2023-07-18 02:14:57,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:57,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:57,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. after waiting 0 ms 2023-07-18 02:14:57,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:57,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:14:57,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 
2023-07-18 02:14:57,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b8f9fa9f57d04072c7900a18782ec9b9: 2023-07-18 02:14:57,807 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b8f9fa9f57d04072c7900a18782ec9b9 move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:14:57,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:57,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:57,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55df0e4f2ce9a9ca3676c096f6b5defe, disabling compactions & flushes 2023-07-18 02:14:57,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:57,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:57,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. after waiting 0 ms 2023-07-18 02:14:57,811 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:57,815 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=CLOSED 2023-07-18 02:14:57,816 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497815"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646497815"}]},"ts":"1689646497815"} 2023-07-18 02:14:57,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:14:57,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 
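[Editor's note] Each REOPEN/MOVE above closes the region on its old server, records an "Adding ... move to <server>" hint at close time, and reopens the region elsewhere. In this run the moves are driven by the rsgroup table move rather than by direct calls, but the generic client entry point for relocating a single region is Admin#move; an illustrative sketch only:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveOneRegion {
  // Ask the master to relocate a region (by encoded name) to a specific region server.
  static void move(Connection conn, String encodedRegionName, ServerName destination)
      throws IOException {
    try (Admin admin = conn.getAdmin()) {
      admin.move(Bytes.toBytes(encodedRegionName), destination);
    }
  }
}
```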
2023-07-18 02:14:57,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d26a05047c700cd40a14b5289e5087f2: 2023-07-18 02:14:57,827 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d26a05047c700cd40a14b5289e5087f2 move to jenkins-hbase4.apache.org,39557,1689646489998 record at close sequenceid=2 2023-07-18 02:14:57,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:57,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:57,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5b31c79b0c2dd00c2c5b23efa1c80b14, disabling compactions & flushes 2023-07-18 02:14:57,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:57,838 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:57,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. after waiting 0 ms 2023-07-18 02:14:57,839 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 
2023-07-18 02:14:57,839 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=CLOSED 2023-07-18 02:14:57,839 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646497839"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646497839"}]},"ts":"1689646497839"} 2023-07-18 02:14:57,845 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=32 2023-07-18 02:14:57,845 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=32, state=SUCCESS; CloseRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,45077,1689646489555 in 245 msec 2023-07-18 02:14:57,847 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:57,849 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-18 02:14:57,849 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; CloseRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,43645,1689646493716 in 234 msec 2023-07-18 02:14:57,850 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:14:57,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:14:57,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 
2023-07-18 02:14:57,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55df0e4f2ce9a9ca3676c096f6b5defe: 2023-07-18 02:14:57,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 55df0e4f2ce9a9ca3676c096f6b5defe move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:14:57,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:57,860 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=CLOSED 2023-07-18 02:14:57,861 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497860"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646497860"}]},"ts":"1689646497860"} 2023-07-18 02:14:57,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=31 2023-07-18 02:14:57,866 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=31, state=SUCCESS; CloseRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,45077,1689646489555 in 278 msec 2023-07-18 02:14:57,867 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:57,879 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:14:57,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:57,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5b31c79b0c2dd00c2c5b23efa1c80b14: 2023-07-18 02:14:57,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5b31c79b0c2dd00c2c5b23efa1c80b14 move to jenkins-hbase4.apache.org,39557,1689646489998 record at close sequenceid=2 2023-07-18 02:14:57,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:57,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:57,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9081579ad90c011736a6a20282632a80, disabling compactions & flushes 2023-07-18 02:14:57,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
2023-07-18 02:14:57,886 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=CLOSED 2023-07-18 02:14:57,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:57,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. after waiting 0 ms 2023-07-18 02:14:57,887 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497886"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646497886"}]},"ts":"1689646497886"} 2023-07-18 02:14:57,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:57,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=30 2023-07-18 02:14:57,892 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=30, state=SUCCESS; CloseRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,43645,1689646493716 in 297 msec 2023-07-18 02:14:57,893 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:14:57,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:14:57,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
2023-07-18 02:14:57,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9081579ad90c011736a6a20282632a80: 2023-07-18 02:14:57,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9081579ad90c011736a6a20282632a80 move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:14:57,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:57,912 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=CLOSED 2023-07-18 02:14:57,912 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646497912"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646497912"}]},"ts":"1689646497912"} 2023-07-18 02:14:57,917 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=29 2023-07-18 02:14:57,918 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=29, state=SUCCESS; CloseRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,43645,1689646493716 in 323 msec 2023-07-18 02:14:57,919 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:57,998 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-18 02:14:57,998 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:57,998 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:57,999 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497998"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497998"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497998"}]},"ts":"1689646497998"} 2023-07-18 02:14:57,999 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646497998"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497998"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497998"}]},"ts":"1689646497998"} 2023-07-18 02:14:57,999 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:57,999 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497999"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497999"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497999"}]},"ts":"1689646497999"} 2023-07-18 02:14:58,000 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,000 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646497999"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646497999"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646497999"}]},"ts":"1689646497999"} 2023-07-18 02:14:58,000 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,000 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498000"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498000"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498000"}]},"ts":"1689646498000"} 2023-07-18 02:14:58,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=30, state=RUNNABLE; OpenRegionProcedure 
5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:58,005 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=33, state=RUNNABLE; OpenRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:58,008 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=32, state=RUNNABLE; OpenRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,013 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, state=RUNNABLE; OpenRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,117 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 02:14:58,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d26a05047c700cd40a14b5289e5087f2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 02:14:58,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:58,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 
2023-07-18 02:14:58,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b8f9fa9f57d04072c7900a18782ec9b9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 02:14:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,193 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,193 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,196 DEBUG [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/f 2023-07-18 02:14:58,196 DEBUG [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/f 2023-07-18 02:14:58,197 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d26a05047c700cd40a14b5289e5087f2 columnFamilyName f 2023-07-18 02:14:58,199 DEBUG [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/f 2023-07-18 02:14:58,199 DEBUG [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/f 2023-07-18 02:14:58,200 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b8f9fa9f57d04072c7900a18782ec9b9 columnFamilyName f 2023-07-18 02:14:58,200 INFO [StoreOpener-b8f9fa9f57d04072c7900a18782ec9b9-1] regionserver.HStore(310): Store=b8f9fa9f57d04072c7900a18782ec9b9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:58,203 INFO [StoreOpener-d26a05047c700cd40a14b5289e5087f2-1] regionserver.HStore(310): Store=d26a05047c700cd40a14b5289e5087f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:58,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b8f9fa9f57d04072c7900a18782ec9b9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10191648160, 
jitterRate=-0.05082879960536957}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:58,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b8f9fa9f57d04072c7900a18782ec9b9: 2023-07-18 02:14:58,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d26a05047c700cd40a14b5289e5087f2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9966257600, jitterRate=-0.07181993126869202}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:58,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d26a05047c700cd40a14b5289e5087f2: 2023-07-18 02:14:58,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9., pid=41, masterSystemTime=1689646498163 2023-07-18 02:14:58,226 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2., pid=40, masterSystemTime=1689646498158 2023-07-18 02:14:58,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:58,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:58,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
2023-07-18 02:14:58,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9081579ad90c011736a6a20282632a80, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 02:14:58,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:58,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,238 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 
2023-07-18 02:14:58,238 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498238"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646498238"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646498238"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646498238"}]},"ts":"1689646498238"} 2023-07-18 02:14:58,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5b31c79b0c2dd00c2c5b23efa1c80b14, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 02:14:58,239 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:58,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,241 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:58,241 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646498240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646498240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646498240"}]},"ts":"1689646498240"} 2023-07-18 02:14:58,241 DEBUG [StoreOpener-9081579ad90c011736a6a20282632a80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/f 2023-07-18 02:14:58,241 DEBUG [StoreOpener-9081579ad90c011736a6a20282632a80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/f 2023-07-18 02:14:58,241 INFO 
[StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,242 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9081579ad90c011736a6a20282632a80 columnFamilyName f 2023-07-18 02:14:58,243 INFO [StoreOpener-9081579ad90c011736a6a20282632a80-1] regionserver.HStore(310): Store=9081579ad90c011736a6a20282632a80/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:58,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,249 DEBUG [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/f 2023-07-18 02:14:58,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,249 DEBUG [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/f 2023-07-18 02:14:58,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=32 2023-07-18 02:14:58,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=32, state=SUCCESS; OpenRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,35063,1689646489808 in 233 msec 2023-07-18 02:14:58,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=33 2023-07-18 02:14:58,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, REOPEN/MOVE in 700 msec 2023-07-18 02:14:58,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=33, state=SUCCESS; OpenRegionProcedure d26a05047c700cd40a14b5289e5087f2, 
server=jenkins-hbase4.apache.org,39557,1689646489998 in 239 msec 2023-07-18 02:14:58,254 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, REOPEN/MOVE in 671 msec 2023-07-18 02:14:58,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,254 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:14:58,255 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 02:14:58,255 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:14:58,255 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 02:14:58,255 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:14:58,255 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 02:14:58,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9081579ad90c011736a6a20282632a80; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11232458560, jitterRate=0.046104222536087036}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:58,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9081579ad90c011736a6a20282632a80: 2023-07-18 02:14:58,256 INFO [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5b31c79b0c2dd00c2c5b23efa1c80b14 columnFamilyName f 2023-07-18 02:14:58,257 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 02:14:58,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80., pid=43, masterSystemTime=1689646498163 2023-07-18 02:14:58,257 INFO [StoreOpener-5b31c79b0c2dd00c2c5b23efa1c80b14-1] regionserver.HStore(310): Store=5b31c79b0c2dd00c2c5b23efa1c80b14/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:58,258 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-18 02:14:58,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:58,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:58,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 55df0e4f2ce9a9ca3676c096f6b5defe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 02:14:58,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:58,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,261 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,262 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498261"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646498261"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646498261"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646498261"}]},"ts":"1689646498261"} 2023-07-18 02:14:58,263 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,264 DEBUG [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/f 2023-07-18 02:14:58,264 DEBUG [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/f 2023-07-18 02:14:58,265 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 55df0e4f2ce9a9ca3676c096f6b5defe columnFamilyName f 2023-07-18 02:14:58,266 INFO [StoreOpener-55df0e4f2ce9a9ca3676c096f6b5defe-1] regionserver.HStore(310): Store=55df0e4f2ce9a9ca3676c096f6b5defe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:58,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-18 02:14:58,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,35063,1689646489808 in 251 msec 2023-07-18 02:14:58,270 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, REOPEN/MOVE in 725 msec 2023-07-18 02:14:58,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 
5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5b31c79b0c2dd00c2c5b23efa1c80b14; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9571660960, jitterRate=-0.10856960713863373}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:58,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5b31c79b0c2dd00c2c5b23efa1c80b14: 2023-07-18 02:14:58,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14., pid=39, masterSystemTime=1689646498158 2023-07-18 02:14:58,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,278 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:58,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,278 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498278"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646498278"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646498278"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646498278"}]},"ts":"1689646498278"} 2023-07-18 02:14:58,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 55df0e4f2ce9a9ca3676c096f6b5defe; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11908154560, jitterRate=0.10903331637382507}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:58,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 55df0e4f2ce9a9ca3676c096f6b5defe: 2023-07-18 02:14:58,281 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe., pid=42, masterSystemTime=1689646498163 2023-07-18 02:14:58,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=30 2023-07-18 02:14:58,284 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,285 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498284"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646498284"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646498284"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646498284"}]},"ts":"1689646498284"} 2023-07-18 02:14:58,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=30, state=SUCCESS; OpenRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,39557,1689646489998 in 278 msec 2023-07-18 02:14:58,287 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, REOPEN/MOVE in 739 msec 2023-07-18 02:14:58,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-18 02:14:58,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,35063,1689646489808 in 276 msec 2023-07-18 02:14:58,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, REOPEN/MOVE in 743 msec 2023-07-18 02:14:58,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-18 02:14:58,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1141100661. 
2023-07-18 02:14:58,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:14:58,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:14:58,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:14:58,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:14:58,592 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:14:58,598 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,617 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646498617"}]},"ts":"1689646498617"} 2023-07-18 02:14:58,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 02:14:58,620 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 02:14:58,622 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 02:14:58,626 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, UNASSIGN}] 2023-07-18 02:14:58,629 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, UNASSIGN 2023-07-18 02:14:58,629 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, UNASSIGN 2023-07-18 02:14:58,629 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, UNASSIGN 2023-07-18 02:14:58,629 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, UNASSIGN 2023-07-18 02:14:58,629 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, UNASSIGN 2023-07-18 02:14:58,630 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,630 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,630 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498630"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498630"}]},"ts":"1689646498630"} 2023-07-18 02:14:58,630 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:58,630 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498630"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498630"}]},"ts":"1689646498630"} 2023-07-18 02:14:58,631 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498630"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498630"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498630"}]},"ts":"1689646498630"} 2023-07-18 
02:14:58,631 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:58,631 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:58,631 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498631"}]},"ts":"1689646498631"} 2023-07-18 02:14:58,631 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646498631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646498631"}]},"ts":"1689646498631"} 2023-07-18 02:14:58,634 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=48, state=RUNNABLE; CloseRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,635 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=47, state=RUNNABLE; CloseRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,636 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=46, state=RUNNABLE; CloseRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:58,638 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:58,638 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=45, state=RUNNABLE; CloseRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:58,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 02:14:58,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 55df0e4f2ce9a9ca3676c096f6b5defe, disabling compactions & flushes 2023-07-18 02:14:58,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 
2023-07-18 02:14:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. after waiting 0 ms 2023-07-18 02:14:58,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5b31c79b0c2dd00c2c5b23efa1c80b14, disabling compactions & flushes 2023-07-18 02:14:58,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. after waiting 0 ms 2023-07-18 02:14:58,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:14:58,800 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe. 2023-07-18 02:14:58,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 55df0e4f2ce9a9ca3676c096f6b5defe: 2023-07-18 02:14:58,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9081579ad90c011736a6a20282632a80, disabling compactions & flushes 2023-07-18 02:14:58,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 
2023-07-18 02:14:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. after waiting 0 ms 2023-07-18 02:14:58,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:58,807 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=55df0e4f2ce9a9ca3676c096f6b5defe, regionState=CLOSED 2023-07-18 02:14:58,807 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646498807"}]},"ts":"1689646498807"} 2023-07-18 02:14:58,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:14:58,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14. 2023-07-18 02:14:58,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5b31c79b0c2dd00c2c5b23efa1c80b14: 2023-07-18 02:14:58,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-18 02:14:58,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; CloseRegionProcedure 55df0e4f2ce9a9ca3676c096f6b5defe, server=jenkins-hbase4.apache.org,35063,1689646489808 in 175 msec 2023-07-18 02:14:58,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,818 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d26a05047c700cd40a14b5289e5087f2, disabling compactions & flushes 2023-07-18 02:14:58,820 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. after waiting 0 ms 2023-07-18 02:14:58,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 
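
[Illustrative aside, not part of the captured log.] The entries above trace DisableTableProcedure pid=44 unassigning each region: the region server takes the close lock, disables compactions and flushes, writes a recovered.edits/N.seqid marker, and reports CLOSED so the master can update hbase:meta. A minimal sketch of the client call that drives this is shown below, using the standard blocking Admin API, which internally polls the master much like the repeated "Checking to see if procedure is done pid=44" lines show. Only the table name comes from the log; the class name and connection handling are assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Blocking call: the master runs DisableTableProcedure and the client
      // polls "is procedure done" until every region is unassigned.
      admin.disableTable(tn);
      if (!admin.isTableDisabled(tn)) {
        throw new IllegalStateException("expected table to be DISABLED");
      }
    }
  }
}
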
2023-07-18 02:14:58,821 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=5b31c79b0c2dd00c2c5b23efa1c80b14, regionState=CLOSED 2023-07-18 02:14:58,821 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646498819"}]},"ts":"1689646498819"} 2023-07-18 02:14:58,822 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=55df0e4f2ce9a9ca3676c096f6b5defe, UNASSIGN in 194 msec 2023-07-18 02:14:58,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:14:58,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80. 2023-07-18 02:14:58,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9081579ad90c011736a6a20282632a80: 2023-07-18 02:14:58,830 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:14:58,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,842 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2. 2023-07-18 02:14:58,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d26a05047c700cd40a14b5289e5087f2: 2023-07-18 02:14:58,851 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=9081579ad90c011736a6a20282632a80, regionState=CLOSED 2023-07-18 02:14:58,852 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646498851"}]},"ts":"1689646498851"} 2023-07-18 02:14:58,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b8f9fa9f57d04072c7900a18782ec9b9, disabling compactions & flushes 2023-07-18 02:14:58,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 
2023-07-18 02:14:58,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:58,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. after waiting 0 ms 2023-07-18 02:14:58,856 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:58,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=46 2023-07-18 02:14:58,865 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=46, state=SUCCESS; CloseRegionProcedure 5b31c79b0c2dd00c2c5b23efa1c80b14, server=jenkins-hbase4.apache.org,39557,1689646489998 in 188 msec 2023-07-18 02:14:58,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,876 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=d26a05047c700cd40a14b5289e5087f2, regionState=CLOSED 2023-07-18 02:14:58,876 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646498876"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646498876"}]},"ts":"1689646498876"} 2023-07-18 02:14:58,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=45 2023-07-18 02:14:58,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; CloseRegionProcedure 9081579ad90c011736a6a20282632a80, server=jenkins-hbase4.apache.org,35063,1689646489808 in 216 msec 2023-07-18 02:14:58,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5b31c79b0c2dd00c2c5b23efa1c80b14, UNASSIGN in 241 msec 2023-07-18 02:14:58,881 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9081579ad90c011736a6a20282632a80, UNASSIGN in 254 msec 2023-07-18 02:14:58,882 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-18 02:14:58,882 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure d26a05047c700cd40a14b5289e5087f2, server=jenkins-hbase4.apache.org,39557,1689646489998 in 240 msec 2023-07-18 02:14:58,884 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d26a05047c700cd40a14b5289e5087f2, UNASSIGN in 258 msec 2023-07-18 02:14:58,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:14:58,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9. 2023-07-18 02:14:58,888 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b8f9fa9f57d04072c7900a18782ec9b9: 2023-07-18 02:14:58,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,891 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b8f9fa9f57d04072c7900a18782ec9b9, regionState=CLOSED 2023-07-18 02:14:58,891 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646498890"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646498890"}]},"ts":"1689646498890"} 2023-07-18 02:14:58,898 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=48 2023-07-18 02:14:58,898 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=48, state=SUCCESS; CloseRegionProcedure b8f9fa9f57d04072c7900a18782ec9b9, server=jenkins-hbase4.apache.org,35063,1689646489808 in 261 msec 2023-07-18 02:14:58,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=44 2023-07-18 02:14:58,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b8f9fa9f57d04072c7900a18782ec9b9, UNASSIGN in 274 msec 2023-07-18 02:14:58,903 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646498903"}]},"ts":"1689646498903"} 2023-07-18 02:14:58,905 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 02:14:58,907 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 02:14:58,911 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 304 msec 2023-07-18 02:14:58,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-18 02:14:58,923 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-18 02:14:58,925 INFO [Listener at localhost/38101] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 02:14:58,939 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 02:14:58,942 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 02:14:58,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 02:14:58,957 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,957 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,957 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,957 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,957 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,962 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits] 2023-07-18 02:14:58,962 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits] 2023-07-18 02:14:58,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits] 2023-07-18 02:14:58,965 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/f, FileablePath, 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits] 2023-07-18 02:14:58,967 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits] 2023-07-18 02:14:58,983 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2/recovered.edits/7.seqid 2023-07-18 02:14:58,983 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe/recovered.edits/7.seqid 2023-07-18 02:14:58,984 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9/recovered.edits/7.seqid 2023-07-18 02:14:58,985 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14/recovered.edits/7.seqid 2023-07-18 02:14:58,985 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d26a05047c700cd40a14b5289e5087f2 2023-07-18 02:14:58,986 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/55df0e4f2ce9a9ca3676c096f6b5defe 2023-07-18 02:14:58,986 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b8f9fa9f57d04072c7900a18782ec9b9 2023-07-18 02:14:58,987 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5b31c79b0c2dd00c2c5b23efa1c80b14 2023-07-18 02:14:58,988 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80/recovered.edits/7.seqid 2023-07-18 02:14:58,988 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9081579ad90c011736a6a20282632a80 2023-07-18 02:14:58,988 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 02:14:59,021 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 02:14:59,025 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 02:14:59,026 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 02:14:59,026 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646499026"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,026 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646499026"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646499026"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646499026"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,027 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646499026"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,033 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 02:14:59,033 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9081579ad90c011736a6a20282632a80, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646496175.9081579ad90c011736a6a20282632a80.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5b31c79b0c2dd00c2c5b23efa1c80b14, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689646496175.5b31c79b0c2dd00c2c5b23efa1c80b14.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 55df0e4f2ce9a9ca3676c096f6b5defe, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646496175.55df0e4f2ce9a9ca3676c096f6b5defe.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b8f9fa9f57d04072c7900a18782ec9b9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646496175.b8f9fa9f57d04072c7900a18782ec9b9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d26a05047c700cd40a14b5289e5087f2, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646496175.d26a05047c700cd40a14b5289e5087f2.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 02:14:59,034 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-18 02:14:59,034 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646499034"}]},"ts":"9223372036854775807"} 2023-07-18 02:14:59,036 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 02:14:59,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 02:14:59,046 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,046 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,046 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,046 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,046 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:14:59,047 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b empty. 2023-07-18 02:14:59,047 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 empty. 2023-07-18 02:14:59,048 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a empty. 
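
[Illustrative aside, not part of the captured log.] The HFileArchiver and DeleteTableProcedure entries above are internal steps of TruncateTableProcedure pid=55 (preserveSplits=true): old region directories are moved under archive/, their rows are removed from hbase:meta, and the table is then recreated. A hedged sketch of the client call that starts this procedure follows; the wrapper method is an assumption, while the table name and the preserveSplits flag come from the log.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class TruncateTableSketch {
  static void truncatePreservingSplits(Admin admin) throws IOException {
    // The table must already be disabled (pid=44 above); otherwise the master
    // rejects the request. The boolean is preserveSplits, true as in pid=55.
    admin.truncateTable(TableName.valueOf("Group_testTableMoveTruncateAndDrop"), true);
  }
}

With preserveSplits=false the table would instead come back as a single empty region rather than the five regions recreated below.
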
2023-07-18 02:14:59,047 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 empty. 2023-07-18 02:14:59,048 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 empty. 2023-07-18 02:14:59,048 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,048 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:14:59,049 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,049 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,049 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,049 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 02:14:59,092 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 02:14:59,095 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3f32bd80a40d50f9d865ba2256bbe77b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:59,096 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => d6059538c6f9c0f135941a32b13e7fe8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:59,096 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 5ea0a4eac1db6889b1adab49179a107a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:59,161 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,161 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,161 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing d6059538c6f9c0f135941a32b13e7fe8, disabling compactions & flushes 2023-07-18 02:14:59,161 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3f32bd80a40d50f9d865ba2256bbe77b, disabling compactions & flushes 2023-07-18 02:14:59,162 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,162 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 
after waiting 0 ms 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. after waiting 0 ms 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,162 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for d6059538c6f9c0f135941a32b13e7fe8: 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,162 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,162 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3f32bd80a40d50f9d865ba2256bbe77b: 2023-07-18 02:14:59,163 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 46440e006f92454df616c9641e555476, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:59,163 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 301d325d606ec0716d674a2373e0ff71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:14:59,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,164 DEBUG 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 5ea0a4eac1db6889b1adab49179a107a, disabling compactions & flushes 2023-07-18 02:14:59,164 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. after waiting 0 ms 2023-07-18 02:14:59,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,164 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,164 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 5ea0a4eac1db6889b1adab49179a107a: 2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 46440e006f92454df616c9641e555476, disabling compactions & flushes 2023-07-18 02:14:59,193 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. after waiting 0 ms 2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,193 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 
2023-07-18 02:14:59,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 46440e006f92454df616c9641e555476: 2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 301d325d606ec0716d674a2373e0ff71, disabling compactions & flushes 2023-07-18 02:14:59,197 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. after waiting 0 ms 2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:14:59,197 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 
2023-07-18 02:14:59,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 301d325d606ec0716d674a2373e0ff71: 2023-07-18 02:14:59,202 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646499202"}]},"ts":"1689646499202"} 2023-07-18 02:14:59,202 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646499202"}]},"ts":"1689646499202"} 2023-07-18 02:14:59,202 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646499202"}]},"ts":"1689646499202"} 2023-07-18 02:14:59,202 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646499202"}]},"ts":"1689646499202"} 2023-07-18 02:14:59,202 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646499202"}]},"ts":"1689646499202"} 2023-07-18 02:14:59,206 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
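
[Illustrative aside, not part of the captured log.] The five new regions added to meta above reuse the old boundaries (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz) because preserveSplits=true. TruncateTableProcedure recreates the table itself, but the same layout could be expressed with the public Admin API; the sketch below is a rough equivalent under that assumption, with the split bytes copied from the log and the class and method names invented for illustration.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreservedSplitsSketch {
  static void createWithPreservedSplits(Admin admin) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Split points as escaped in the log: aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
    };
    // Produces five regions: ('', aaaaa), [aaaaa, i\xBF\x14i\xBE), ..., [zzzzz, '').
    admin.createTable(desc, splits);
  }
}
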
2023-07-18 02:14:59,208 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646499208"}]},"ts":"1689646499208"} 2023-07-18 02:14:59,210 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 02:14:59,216 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:14:59,216 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:14:59,216 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:14:59,216 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:14:59,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, ASSIGN}] 2023-07-18 02:14:59,220 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, ASSIGN 2023-07-18 02:14:59,220 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, ASSIGN 2023-07-18 02:14:59,220 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, ASSIGN 2023-07-18 02:14:59,221 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, ASSIGN 2023-07-18 02:14:59,221 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, ASSIGN 2023-07-18 02:14:59,222 INFO [PEWorker-4] 
assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:14:59,222 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:59,222 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:14:59,223 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:14:59,223 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:14:59,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 02:14:59,372 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
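
[Illustrative aside, not part of the captured log.] At this point the balancer has chosen a target server for each new region, and the ASSIGN procedures below move them to OPENING on jenkins-hbase4.apache.org,35063 and 39557. A small sketch of how a client could inspect the resulting placement afterwards is given here; the RegionLocator calls are standard client API, while the wrapper method and printing are assumptions.

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementSketch {
  static void printPlacements(Connection conn) throws IOException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      // One HRegionLocation per region, including the server it was assigned to.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
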
2023-07-18 02:14:59,376 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=46440e006f92454df616c9641e555476, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:59,376 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=d6059538c6f9c0f135941a32b13e7fe8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,376 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646499376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646499376"}]},"ts":"1689646499376"} 2023-07-18 02:14:59,376 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=5ea0a4eac1db6889b1adab49179a107a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,376 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646499376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646499376"}]},"ts":"1689646499376"} 2023-07-18 02:14:59,376 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646499376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646499376"}]},"ts":"1689646499376"} 2023-07-18 02:14:59,376 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3f32bd80a40d50f9d865ba2256bbe77b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:59,376 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=301d325d606ec0716d674a2373e0ff71, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,376 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646499376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646499376"}]},"ts":"1689646499376"} 2023-07-18 02:14:59,377 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646499376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646499376"}]},"ts":"1689646499376"} 2023-07-18 02:14:59,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
46440e006f92454df616c9641e555476, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:59,381 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure d6059538c6f9c0f135941a32b13e7fe8, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:59,383 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; OpenRegionProcedure 5ea0a4eac1db6889b1adab49179a107a, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:59,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=56, state=RUNNABLE; OpenRegionProcedure 3f32bd80a40d50f9d865ba2256bbe77b, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:14:59,389 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure 301d325d606ec0716d674a2373e0ff71, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:14:59,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3f32bd80a40d50f9d865ba2256bbe77b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 02:14:59,538 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 
2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 301d325d606ec0716d674a2373e0ff71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,540 INFO [StoreOpener-3f32bd80a40d50f9d865ba2256bbe77b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,540 INFO [StoreOpener-301d325d606ec0716d674a2373e0ff71-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,542 DEBUG [StoreOpener-3f32bd80a40d50f9d865ba2256bbe77b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/f 2023-07-18 02:14:59,542 DEBUG [StoreOpener-3f32bd80a40d50f9d865ba2256bbe77b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/f 2023-07-18 02:14:59,542 DEBUG [StoreOpener-301d325d606ec0716d674a2373e0ff71-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/f 2023-07-18 02:14:59,542 DEBUG [StoreOpener-301d325d606ec0716d674a2373e0ff71-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/f 2023-07-18 02:14:59,542 INFO [StoreOpener-3f32bd80a40d50f9d865ba2256bbe77b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3f32bd80a40d50f9d865ba2256bbe77b columnFamilyName f 2023-07-18 02:14:59,542 INFO [StoreOpener-301d325d606ec0716d674a2373e0ff71-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 301d325d606ec0716d674a2373e0ff71 columnFamilyName f 2023-07-18 02:14:59,543 INFO [StoreOpener-3f32bd80a40d50f9d865ba2256bbe77b-1] regionserver.HStore(310): Store=3f32bd80a40d50f9d865ba2256bbe77b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:59,543 INFO [StoreOpener-301d325d606ec0716d674a2373e0ff71-1] regionserver.HStore(310): Store=301d325d606ec0716d674a2373e0ff71/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:59,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:14:59,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 02:14:59,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:14:59,551 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:59,552 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3f32bd80a40d50f9d865ba2256bbe77b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11766371200, jitterRate=0.09582871198654175}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:59,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3f32bd80a40d50f9d865ba2256bbe77b: 2023-07-18 02:14:59,553 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b., pid=64, masterSystemTime=1689646499533 2023-07-18 02:14:59,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:59,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 301d325d606ec0716d674a2373e0ff71; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10234116480, jitterRate=-0.046873629093170166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:59,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 301d325d606ec0716d674a2373e0ff71: 2023-07-18 02:14:59,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:14:59,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 
2023-07-18 02:14:59,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46440e006f92454df616c9641e555476, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 02:14:59,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71., pid=65, masterSystemTime=1689646499534 2023-07-18 02:14:59,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 46440e006f92454df616c9641e555476 2023-07-18 02:14:59,555 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 46440e006f92454df616c9641e555476 2023-07-18 02:14:59,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 46440e006f92454df616c9641e555476 2023-07-18 02:14:59,556 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=3f32bd80a40d50f9d865ba2256bbe77b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:59,556 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499556"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646499556"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646499556"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646499556"}]},"ts":"1689646499556"} 2023-07-18 02:14:59,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:14:59,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:14:59,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 
2023-07-18 02:14:59,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ea0a4eac1db6889b1adab49179a107a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 02:14:59,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,558 INFO [StoreOpener-46440e006f92454df616c9641e555476-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 46440e006f92454df616c9641e555476 2023-07-18 02:14:59,558 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=301d325d606ec0716d674a2373e0ff71, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,559 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499558"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646499558"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646499558"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646499558"}]},"ts":"1689646499558"} 2023-07-18 02:14:59,561 INFO [StoreOpener-5ea0a4eac1db6889b1adab49179a107a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,562 DEBUG [StoreOpener-46440e006f92454df616c9641e555476-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/f 2023-07-18 02:14:59,562 DEBUG [StoreOpener-46440e006f92454df616c9641e555476-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/f 2023-07-18 02:14:59,563 INFO [StoreOpener-46440e006f92454df616c9641e555476-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46440e006f92454df616c9641e555476 columnFamilyName f 2023-07-18 02:14:59,564 DEBUG [StoreOpener-5ea0a4eac1db6889b1adab49179a107a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/f 2023-07-18 02:14:59,564 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=56 2023-07-18 02:14:59,567 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=56, state=SUCCESS; OpenRegionProcedure 3f32bd80a40d50f9d865ba2256bbe77b, server=jenkins-hbase4.apache.org,35063,1689646489808 in 172 msec 2023-07-18 02:14:59,567 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-18 02:14:59,567 DEBUG [StoreOpener-5ea0a4eac1db6889b1adab49179a107a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/f 2023-07-18 02:14:59,567 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure 301d325d606ec0716d674a2373e0ff71, server=jenkins-hbase4.apache.org,39557,1689646489998 in 175 msec 2023-07-18 02:14:59,567 INFO [StoreOpener-46440e006f92454df616c9641e555476-1] regionserver.HStore(310): Store=46440e006f92454df616c9641e555476/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:59,567 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, ASSIGN in 347 msec 2023-07-18 02:14:59,568 INFO [StoreOpener-5ea0a4eac1db6889b1adab49179a107a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ea0a4eac1db6889b1adab49179a107a columnFamilyName f 2023-07-18 02:14:59,568 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:14:59,568 INFO [StoreOpener-5ea0a4eac1db6889b1adab49179a107a-1] regionserver.HStore(310): Store=5ea0a4eac1db6889b1adab49179a107a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:59,569 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, ASSIGN in 350 msec 2023-07-18 02:14:59,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:14:59,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 46440e006f92454df616c9641e555476 2023-07-18 02:14:59,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:59,582 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 46440e006f92454df616c9641e555476; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9950572320, jitterRate=-0.07328073680400848}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:59,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 46440e006f92454df616c9641e555476: 2023-07-18 02:14:59,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:14:59,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476., pid=61, masterSystemTime=1689646499533 2023-07-18 02:14:59,587 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=46440e006f92454df616c9641e555476, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:14:59,587 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646499586"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646499586"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646499586"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646499586"}]},"ts":"1689646499586"} 2023-07-18 02:14:59,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:14:59,593 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-18 02:14:59,593 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 46440e006f92454df616c9641e555476, server=jenkins-hbase4.apache.org,35063,1689646489808 in 210 msec 2023-07-18 02:14:59,594 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, ASSIGN in 376 msec 2023-07-18 02:14:59,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:59,601 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ea0a4eac1db6889b1adab49179a107a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9992507360, jitterRate=-0.06937523186206818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:59,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ea0a4eac1db6889b1adab49179a107a: 2023-07-18 02:14:59,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a., pid=63, masterSystemTime=1689646499534 2023-07-18 02:14:59,618 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=5ea0a4eac1db6889b1adab49179a107a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,619 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499618"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646499618"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646499618"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646499618"}]},"ts":"1689646499618"} 2023-07-18 02:14:59,619 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,619 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:14:59,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d6059538c6f9c0f135941a32b13e7fe8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 02:14:59,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:14:59,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,627 INFO [StoreOpener-d6059538c6f9c0f135941a32b13e7fe8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,628 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-18 02:14:59,628 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; OpenRegionProcedure 5ea0a4eac1db6889b1adab49179a107a, server=jenkins-hbase4.apache.org,39557,1689646489998 in 239 msec 2023-07-18 02:14:59,630 DEBUG [StoreOpener-d6059538c6f9c0f135941a32b13e7fe8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/f 2023-07-18 02:14:59,630 DEBUG [StoreOpener-d6059538c6f9c0f135941a32b13e7fe8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/f 2023-07-18 02:14:59,631 INFO [StoreOpener-d6059538c6f9c0f135941a32b13e7fe8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d6059538c6f9c0f135941a32b13e7fe8 columnFamilyName f 2023-07-18 02:14:59,632 INFO [StoreOpener-d6059538c6f9c0f135941a32b13e7fe8-1] regionserver.HStore(310): Store=d6059538c6f9c0f135941a32b13e7fe8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:14:59,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,636 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, ASSIGN in 411 msec 2023-07-18 02:14:59,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:14:59,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:14:59,648 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d6059538c6f9c0f135941a32b13e7fe8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12043857280, jitterRate=0.12167161703109741}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:14:59,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d6059538c6f9c0f135941a32b13e7fe8: 2023-07-18 02:14:59,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8., pid=62, masterSystemTime=1689646499534 2023-07-18 02:14:59,651 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:14:59,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 
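
At this point all five regions of the truncated table are open again, and the PEWorker entries that follow mark TruncateTableProcedure pid=55 as finished. On the client side the whole sequence is driven by a single Admin call; a minimal sketch, assuming an existing Admin handle named admin (illustrative only, not the actual TestRSGroupsAdmin1 code):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Truncate while keeping the existing split points, matching
    // "TruncateTableProcedure (table=... preserveSplits=true)" in the log.
    // The table must already be disabled before truncateTable is called.
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    admin.truncateTable(table, true);
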
2023-07-18 02:14:59,651 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=d6059538c6f9c0f135941a32b13e7fe8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:14:59,652 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646499651"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646499651"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646499651"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646499651"}]},"ts":"1689646499651"} 2023-07-18 02:14:59,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-18 02:14:59,656 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure d6059538c6f9c0f135941a32b13e7fe8, server=jenkins-hbase4.apache.org,39557,1689646489998 in 273 msec 2023-07-18 02:14:59,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=55 2023-07-18 02:14:59,660 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, ASSIGN in 440 msec 2023-07-18 02:14:59,660 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646499660"}]},"ts":"1689646499660"} 2023-07-18 02:14:59,663 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 02:14:59,665 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 02:14:59,667 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 732 msec 2023-07-18 02:15:00,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-18 02:15:00,051 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-18 02:15:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,056 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 02:15:00,062 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646500062"}]},"ts":"1689646500062"} 2023-07-18 02:15:00,063 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 02:15:00,065 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 02:15:00,066 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, UNASSIGN}] 2023-07-18 02:15:00,070 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, UNASSIGN 2023-07-18 02:15:00,072 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, UNASSIGN 2023-07-18 02:15:00,072 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, UNASSIGN 2023-07-18 02:15:00,072 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, UNASSIGN 2023-07-18 
02:15:00,072 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, UNASSIGN 2023-07-18 02:15:00,073 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=46440e006f92454df616c9641e555476, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:00,073 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646500073"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500073"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500073"}]},"ts":"1689646500073"} 2023-07-18 02:15:00,073 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=301d325d606ec0716d674a2373e0ff71, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:00,073 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500073"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500073"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500073"}]},"ts":"1689646500073"} 2023-07-18 02:15:00,074 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=d6059538c6f9c0f135941a32b13e7fe8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:00,074 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=5ea0a4eac1db6889b1adab49179a107a, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:00,074 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500074"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500074"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500074"}]},"ts":"1689646500074"} 2023-07-18 02:15:00,074 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500074"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500074"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500074"}]},"ts":"1689646500074"} 2023-07-18 02:15:00,076 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3f32bd80a40d50f9d865ba2256bbe77b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:00,076 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646500076"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500076"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500076"}]},"ts":"1689646500076"} 2023-07-18 02:15:00,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=71, state=RUNNABLE; CloseRegionProcedure 46440e006f92454df616c9641e555476, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:00,091 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=70, state=RUNNABLE; CloseRegionProcedure 301d325d606ec0716d674a2373e0ff71, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:00,092 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=68, state=RUNNABLE; CloseRegionProcedure d6059538c6f9c0f135941a32b13e7fe8, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:00,093 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=69, state=RUNNABLE; CloseRegionProcedure 5ea0a4eac1db6889b1adab49179a107a, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:00,094 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=67, state=RUNNABLE; CloseRegionProcedure 3f32bd80a40d50f9d865ba2256bbe77b, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:00,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 02:15:00,243 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:15:00,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3f32bd80a40d50f9d865ba2256bbe77b, disabling compactions & flushes 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d6059538c6f9c0f135941a32b13e7fe8, disabling compactions & flushes 2023-07-18 02:15:00,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:15:00,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 
after waiting 0 ms 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. after waiting 0 ms 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:15:00,248 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:15:00,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:00,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8. 2023-07-18 02:15:00,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d6059538c6f9c0f135941a32b13e7fe8: 2023-07-18 02:15:00,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:15:00,257 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:15:00,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 301d325d606ec0716d674a2373e0ff71, disabling compactions & flushes 2023-07-18 02:15:00,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:15:00,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:15:00,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. after waiting 0 ms 2023-07-18 02:15:00,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 
2023-07-18 02:15:00,259 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=d6059538c6f9c0f135941a32b13e7fe8, regionState=CLOSED 2023-07-18 02:15:00,259 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500259"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646500259"}]},"ts":"1689646500259"} 2023-07-18 02:15:00,262 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:00,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:00,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b. 2023-07-18 02:15:00,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3f32bd80a40d50f9d865ba2256bbe77b: 2023-07-18 02:15:00,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71. 2023-07-18 02:15:00,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 301d325d606ec0716d674a2373e0ff71: 2023-07-18 02:15:00,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:15:00,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 46440e006f92454df616c9641e555476 2023-07-18 02:15:00,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 46440e006f92454df616c9641e555476, disabling compactions & flushes 2023-07-18 02:15:00,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:15:00,270 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=3f32bd80a40d50f9d865ba2256bbe77b, regionState=CLOSED 2023-07-18 02:15:00,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 2023-07-18 02:15:00,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. after waiting 0 ms 2023-07-18 02:15:00,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 
2023-07-18 02:15:00,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 301d325d606ec0716d674a2373e0ff71 2023-07-18 02:15:00,271 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:15:00,270 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646500270"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646500270"}]},"ts":"1689646500270"} 2023-07-18 02:15:00,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ea0a4eac1db6889b1adab49179a107a, disabling compactions & flushes 2023-07-18 02:15:00,272 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:15:00,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:15:00,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. after waiting 0 ms 2023-07-18 02:15:00,272 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 
2023-07-18 02:15:00,273 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=301d325d606ec0716d674a2373e0ff71, regionState=CLOSED 2023-07-18 02:15:00,273 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500273"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646500273"}]},"ts":"1689646500273"} 2023-07-18 02:15:00,274 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=68 2023-07-18 02:15:00,275 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=68, state=SUCCESS; CloseRegionProcedure d6059538c6f9c0f135941a32b13e7fe8, server=jenkins-hbase4.apache.org,39557,1689646489998 in 176 msec 2023-07-18 02:15:00,277 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=d6059538c6f9c0f135941a32b13e7fe8, UNASSIGN in 209 msec 2023-07-18 02:15:00,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=67 2023-07-18 02:15:00,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=67, state=SUCCESS; CloseRegionProcedure 3f32bd80a40d50f9d865ba2256bbe77b, server=jenkins-hbase4.apache.org,35063,1689646489808 in 183 msec 2023-07-18 02:15:00,284 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=70 2023-07-18 02:15:00,284 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=70, state=SUCCESS; CloseRegionProcedure 301d325d606ec0716d674a2373e0ff71, server=jenkins-hbase4.apache.org,39557,1689646489998 in 187 msec 2023-07-18 02:15:00,284 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3f32bd80a40d50f9d865ba2256bbe77b, UNASSIGN in 213 msec 2023-07-18 02:15:00,286 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=301d325d606ec0716d674a2373e0ff71, UNASSIGN in 218 msec 2023-07-18 02:15:00,286 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:00,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:00,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476. 
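
The CloseRegionProcedure and UNASSIGN entries above belong to DisableTableProcedure pid=66, while the HFileArchiver entries further below belong to DeleteTableProcedure pid=77, which also removes the deleted table from its rsgroup. The corresponding client calls are roughly as follows, a sketch under the same assumption of an existing Admin handle named admin:

    // Disable and then drop the table; deleteTable requires the table to be disabled first.
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    admin.disableTable(table);
    admin.deleteTable(table);
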
2023-07-18 02:15:00,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 46440e006f92454df616c9641e555476: 2023-07-18 02:15:00,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a. 2023-07-18 02:15:00,288 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ea0a4eac1db6889b1adab49179a107a: 2023-07-18 02:15:00,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 46440e006f92454df616c9641e555476 2023-07-18 02:15:00,290 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=46440e006f92454df616c9641e555476, regionState=CLOSED 2023-07-18 02:15:00,290 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689646500290"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646500290"}]},"ts":"1689646500290"} 2023-07-18 02:15:00,291 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:15:00,292 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=5ea0a4eac1db6889b1adab49179a107a, regionState=CLOSED 2023-07-18 02:15:00,292 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689646500292"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646500292"}]},"ts":"1689646500292"} 2023-07-18 02:15:00,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=71 2023-07-18 02:15:00,295 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=71, state=SUCCESS; CloseRegionProcedure 46440e006f92454df616c9641e555476, server=jenkins-hbase4.apache.org,35063,1689646489808 in 204 msec 2023-07-18 02:15:00,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=69 2023-07-18 02:15:00,297 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=46440e006f92454df616c9641e555476, UNASSIGN in 229 msec 2023-07-18 02:15:00,297 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=69, state=SUCCESS; CloseRegionProcedure 5ea0a4eac1db6889b1adab49179a107a, server=jenkins-hbase4.apache.org,39557,1689646489998 in 201 msec 2023-07-18 02:15:00,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-18 02:15:00,299 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ea0a4eac1db6889b1adab49179a107a, UNASSIGN in 231 msec 2023-07-18 02:15:00,300 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646500300"}]},"ts":"1689646500300"} 2023-07-18 02:15:00,301 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 02:15:00,303 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 02:15:00,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 247 msec 2023-07-18 02:15:00,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-18 02:15:00,365 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-18 02:15:00,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,382 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1141100661' 2023-07-18 02:15:00,383 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:00,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-18 02:15:00,400 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:15:00,400 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:15:00,400 DEBUG 
[HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:15:00,400 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:15:00,400 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:15:00,405 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/recovered.edits] 2023-07-18 02:15:00,405 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/recovered.edits] 2023-07-18 02:15:00,405 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/recovered.edits] 2023-07-18 02:15:00,406 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/recovered.edits] 2023-07-18 02:15:00,406 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/recovered.edits] 2023-07-18 02:15:00,421 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/recovered.edits/4.seqid to 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b/recovered.edits/4.seqid 2023-07-18 02:15:00,421 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a/recovered.edits/4.seqid 2023-07-18 02:15:00,423 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71/recovered.edits/4.seqid 2023-07-18 02:15:00,423 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8/recovered.edits/4.seqid 2023-07-18 02:15:00,423 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3f32bd80a40d50f9d865ba2256bbe77b 2023-07-18 02:15:00,423 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476/recovered.edits/4.seqid 2023-07-18 02:15:00,423 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ea0a4eac1db6889b1adab49179a107a 2023-07-18 02:15:00,424 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/301d325d606ec0716d674a2373e0ff71 2023-07-18 02:15:00,424 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/46440e006f92454df616c9641e555476 2023-07-18 02:15:00,424 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testTableMoveTruncateAndDrop/d6059538c6f9c0f135941a32b13e7fe8 2023-07-18 02:15:00,425 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived 
Group_testTableMoveTruncateAndDrop regions 2023-07-18 02:15:00,428 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,437 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 02:15:00,440 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 02:15:00,441 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646500442"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646500442"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646500442"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646500442"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646500442"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,444 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 02:15:00,445 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3f32bd80a40d50f9d865ba2256bbe77b, NAME => 'Group_testTableMoveTruncateAndDrop,,1689646498991.3f32bd80a40d50f9d865ba2256bbe77b.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => d6059538c6f9c0f135941a32b13e7fe8, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689646498991.d6059538c6f9c0f135941a32b13e7fe8.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 5ea0a4eac1db6889b1adab49179a107a, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689646498991.5ea0a4eac1db6889b1adab49179a107a.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 301d325d606ec0716d674a2373e0ff71, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689646498991.301d325d606ec0716d674a2373e0ff71.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 
46440e006f92454df616c9641e555476, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689646498991.46440e006f92454df616c9641e555476.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 02:15:00,445 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-18 02:15:00,445 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646500445"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:00,447 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 02:15:00,449 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 02:15:00,451 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 77 msec 2023-07-18 02:15:00,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-18 02:15:00,501 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-18 02:15:00,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,507 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35063] ipc.CallRunner(144): callId: 159 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:60920 deadline: 1689646560507, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43645 startCode=1689646493716. As of locationSeqNum=6. 2023-07-18 02:15:00,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
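The teardown that follows moves the borrowed region servers back to the default group and then removes the temporary group. Below is a minimal sketch of those two calls, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (the same client referenced in the stack trace further down); the host names, ports, and group name are copied from the log purely for illustration.

```java
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Illustrative only: the two rsgroup calls that correspond to the MoveServers
// and RemoveRSGroup requests logged below.
public class RsGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Move the region servers that were borrowed for the test group back to 'default'.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35063));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39557));
      rsGroupAdmin.moveServers(servers, "default");

      // Once the group holds no servers or tables, it can be removed.
      rsGroupAdmin.removeRSGroup("Group_testTableMoveTruncateAndDrop_1141100661");
    }
  }
}
```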
2023-07-18 02:15:00,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup default 2023-07-18 02:15:00,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:00,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1141100661, current retry=0 2023-07-18 02:15:00,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1141100661 => default 2023-07-18 02:15:00,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1141100661 2023-07-18 02:15:00,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:00,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
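Next, the harness re-creates a "master" group and asks to move the master's own address (port 40909) into it; because the master is not a registered region server, the rsgroup endpoint rejects the request with a ConstraintException, which the teardown merely logs. A hedged sketch of that tolerate-the-rejection pattern follows; the class and method names are hypothetical, not taken from the test source.

```java
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hypothetical helper: tolerates the expected rejection when the master's own
// address is offered to an rsgroup, mirroring the "Got this on setup, FYI" WARN below.
public class MoveMasterSketch {
  static void tryMoveMasterToGroup(RSGroupAdminClient rsGroupAdmin,
      String masterHost, int masterPort) throws IOException {
    try {
      // The master is not a registered region server, so the rsgroup endpoint
      // rejects this with a ConstraintException ("...is either offline or it does not exist").
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts(masterHost, masterPort)), "master");
    } catch (ConstraintException expected) {
      // Expected on this path; the test harness logs it and continues its cleanup.
    }
  }
}
```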
2023-07-18 02:15:00,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:00,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:00,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:00,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,667 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:00,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:00,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:00,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:00,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647700683, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:00,684 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:00,686 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:00,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,687 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:00,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:00,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,719 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 424) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:55156 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:55654 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-972952586_17 at /127.0.0.1:55682 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:55696 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:55126 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45101 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43645-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7-prefix:jenkins-hbase4.apache.org,43645,1689646493716.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:54186 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54439@0x1afe09e6-SendThread(127.0.0.1:54439) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1397003324_17 at /127.0.0.1:54228 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7-prefix:jenkins-hbase4.apache.org,43645,1689646493716 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1496928480-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43645Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54439@0x1afe09e6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-639-acceptor-0@705126d4-ServerConnector@53ac63a1{HTTP/1.1, (http/1.1)}{0.0.0.0:35389} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:45101 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54439@0x1afe09e6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1496928480-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-a749b0e-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1496928480-638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:54208 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:43645 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-972952586_17 at /127.0.0.1:55238 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics 
system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=802 (was 681) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 411) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=2893 (was 3436) 2023-07-18 02:15:00,722 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-18 02:15:00,738 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=802, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=172, AvailableMemoryMB=2892 2023-07-18 02:15:00,738 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-18 02:15:00,738 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 02:15:00,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:00,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:00,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:00,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:00,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,761 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:00,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:00,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 02:15:00,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:00,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:00,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647700775, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:00,776 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:00,777 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:00,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,778 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:00,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:00,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 02:15:00,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:39122 deadline: 1689647700780, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 02:15:00,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 02:15:00,782 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:39122 deadline: 1689647700782, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 02:15:00,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 02:15:00,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:39122 deadline: 1689647700783, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 02:15:00,785 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 02:15:00,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 02:15:00,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:00,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:00,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:00,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 02:15:00,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:00,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:00,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:00,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:00,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:00,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,830 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:00,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:00,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:00,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:00,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647700854, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:00,855 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:00,856 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:00,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,860 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:00,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:00,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,891 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=802 (was 802), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=434 (was 434), ProcessCount=172 (was 172), AvailableMemoryMB=2893 (was 2892) - AvailableMemoryMB LEAK? - 2023-07-18 02:15:00,891 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-18 02:15:00,914 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=802, MaxFileDescriptor=60000, SystemLoadAverage=434, ProcessCount=172, AvailableMemoryMB=2892 2023-07-18 02:15:00,914 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-18 02:15:00,914 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 02:15:00,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:00,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:00,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:00,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:00,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:00,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:00,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:00,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:00,935 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:00,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:00,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:00,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,946 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:00,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:00,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647700949, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:00,950 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:00,952 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:00,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,953 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:00,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:00,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:00,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:00,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-18 02:15:00,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:00,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:00,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:00,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:00,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:00,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645] to rsgroup bar 2023-07-18 02:15:00,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:00,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:00,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:00,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:00,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(238): Moving server region fbc284aeb66f3eaca0bb2d67e73a56a3, which do not belong to RSGroup bar 2023-07-18 02:15:00,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE 2023-07-18 02:15:00,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 02:15:00,977 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE 2023-07-18 02:15:00,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 02:15:00,978 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-18 02:15:00,978 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:00,981 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 02:15:00,981 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646500978"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646500978"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646500978"}]},"ts":"1689646500978"} 2023-07-18 02:15:00,983 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43645,1689646493716, state=CLOSING 2023-07-18 02:15:00,985 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:15:00,986 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:00,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:00,990 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=78, state=RUNNABLE; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:00,994 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=81, ppid=78, state=RUNNABLE; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:01,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 02:15:01,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:01,141 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:01,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:01,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:01,141 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:01,142 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=39.13 KB heapSize=60.16 KB 2023-07-18 02:15:01,160 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.DefaultStoreFlusher(82): Flushed memstore data size=36.25 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,166 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,184 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/rep_barrier/5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,190 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,205 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,211 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,213 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/ac295947ba814a6aabef794e3634a1fa as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,219 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,219 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/ac295947ba814a6aabef794e3634a1fa, entries=31, sequenceid=101, filesize=8.4 K 2023-07-18 02:15:01,221 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/rep_barrier/5581c78f77ef497eb9d7c805b6436954 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier/5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,229 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,229 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier/5581c78f77ef497eb9d7c805b6436954, entries=10, sequenceid=101, filesize=6.1 K 2023-07-18 02:15:01,230 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/57f6e2e49ff645a1b913c3bdf2b13025 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/57f6e2e49ff645a1b913c3bdf2b13025, entries=11, sequenceid=101, filesize=6.0 K 2023-07-18 02:15:01,239 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~39.13 KB/40071, heapSize ~60.11 KB/61552, currentSize=0 B/0 for 1588230740 in 98ms, sequenceid=101, compaction requested=false 2023-07-18 02:15:01,253 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=18 2023-07-18 02:15:01,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:01,258 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:01,258 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:15:01,258 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45077,1689646489555 record at close sequenceid=101 2023-07-18 02:15:01,260 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 02:15:01,261 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-18 02:15:01,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-18 02:15:01,263 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43645,1689646493716 in 275 msec 2023-07-18 02:15:01,263 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:01,414 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45077,1689646489555, state=OPENING 2023-07-18 02:15:01,415 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, 
state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:15:01,416 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:01,416 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=79, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:01,572 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 02:15:01,572 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:01,574 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45077%2C1689646489555.meta, suffix=.meta, logDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,45077,1689646489555, archiveDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs, maxLogs=32 2023-07-18 02:15:01,590 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK] 2023-07-18 02:15:01,591 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK] 2023-07-18 02:15:01,592 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK] 2023-07-18 02:15:01,594 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/WALs/jenkins-hbase4.apache.org,45077,1689646489555/jenkins-hbase4.apache.org%2C45077%2C1689646489555.meta.1689646501575.meta 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34885,DS-afa6b23c-0172-447d-8546-c0b8f662d95b,DISK], DatanodeInfoWithStorage[127.0.0.1:33339,DS-bef9494d-281f-4e87-b04c-fe86fdcfb4dc,DISK], DatanodeInfoWithStorage[127.0.0.1:38365,DS-9de188ed-4aa0-40e3-be2d-fc8641659521,DISK]] 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 02:15:01,595 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 02:15:01,595 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 02:15:01,597 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:15:01,598 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:15:01,598 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info 2023-07-18 02:15:01,598 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:15:01,607 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,607 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/ac295947ba814a6aabef794e3634a1fa 2023-07-18 02:15:01,613 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/f6975690ee324060b18de846d256e046 2023-07-18 02:15:01,613 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:01,613 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:15:01,614 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:01,614 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:01,615 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:15:01,621 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,622 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier/5581c78f77ef497eb9d7c805b6436954 2023-07-18 02:15:01,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:01,622 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:15:01,623 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:15:01,623 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table 2023-07-18 02:15:01,623 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 
02:15:01,630 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,631 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/57f6e2e49ff645a1b913c3bdf2b13025 2023-07-18 02:15:01,636 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/8aaa79afa8164b0582eb69bd2cec2d06 2023-07-18 02:15:01,636 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:01,637 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:15:01,638 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740 2023-07-18 02:15:01,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 02:15:01,645 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:15:01,646 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=105; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10580639680, jitterRate=-0.014601141214370728}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:15:01,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:15:01,647 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=82, masterSystemTime=1689646501568 2023-07-18 02:15:01,649 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 02:15:01,649 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 02:15:01,650 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45077,1689646489555, state=OPEN 2023-07-18 02:15:01,652 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:15:01,652 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:01,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=82, resume processing ppid=79 2023-07-18 02:15:01,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=79, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45077,1689646489555 in 237 msec 2023-07-18 02:15:01,660 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 682 msec 2023-07-18 02:15:01,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:01,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fbc284aeb66f3eaca0bb2d67e73a56a3, disabling compactions & flushes 2023-07-18 02:15:01,806 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:01,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:01,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. after waiting 0 ms 2023-07-18 02:15:01,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:01,812 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-18 02:15:01,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 
2023-07-18 02:15:01,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:15:01,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fbc284aeb66f3eaca0bb2d67e73a56a3 move to jenkins-hbase4.apache.org,45077,1689646489555 record at close sequenceid=10 2023-07-18 02:15:01,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:01,816 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=CLOSED 2023-07-18 02:15:01,816 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646501816"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646501816"}]},"ts":"1689646501816"} 2023-07-18 02:15:01,817 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43645] ipc.CallRunner(144): callId: 189 service: ClientService methodName: Mutate size: 218 connection: 172.31.14.131:42424 deadline: 1689646561817, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45077 startCode=1689646489555. As of locationSeqNum=101. 2023-07-18 02:15:01,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=78 2023-07-18 02:15:01,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=78, state=SUCCESS; CloseRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,43645,1689646493716 in 930 msec 2023-07-18 02:15:01,924 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:01,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-18 02:15:02,075 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:02,075 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646502075"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646502075"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646502075"}]},"ts":"1689646502075"} 2023-07-18 02:15:02,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=78, state=RUNNABLE; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:02,234 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 
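
The RegionMovedException logged just above is the old RegionServer (port 43645) telling a client that hbase:meta has moved to port 45077 as of locationSeqNum=101; the client then refreshes its cached location and retries. A minimal sketch of forcing such a refresh from client code, assuming only the standard HBase 2.x client API (the connection setup and empty row key here are illustrative, not taken from this test):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RefreshLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // reload=true bypasses the client-side location cache and asks for the
          // current assignment, which is how a retry learns that hbase:meta now
          // lives on the new RegionServer after a RegionMovedException.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println("hbase:meta is now on " + loc.getServerName());
        }
      }
    }
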
2023-07-18 02:15:02,234 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fbc284aeb66f3eaca0bb2d67e73a56a3, NAME => 'hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:02,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:02,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,237 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,239 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:15:02,239 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info 2023-07-18 02:15:02,239 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fbc284aeb66f3eaca0bb2d67e73a56a3 columnFamilyName info 2023-07-18 02:15:02,251 DEBUG [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(539): loaded hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/info/9b08322805ff412fa8b15e0d8d41867f 2023-07-18 02:15:02,251 INFO [StoreOpener-fbc284aeb66f3eaca0bb2d67e73a56a3-1] regionserver.HStore(310): Store=fbc284aeb66f3eaca0bb2d67e73a56a3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:02,252 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,253 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:02,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fbc284aeb66f3eaca0bb2d67e73a56a3; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9719065280, jitterRate=-0.09484151005744934}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:02,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:15:02,259 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3., pid=83, masterSystemTime=1689646502229 2023-07-18 02:15:02,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:02,261 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 
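
By this point both hbase:meta and the hbase:namespace region have been re-opened on jenkins-hbase4.apache.org,45077 via REOPEN/MOVE procedures; these transitions are the evacuation triggered by moving servers into RSGroup "bar", and the confirmation ("Move servers done: default => bar") appears just below. A hedged sketch of issuing that kind of server move from a client, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (its constructor and method signatures are an assumption here, and the host/port are placeholders, not the literal calls this test makes):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumed helper from the hbase-rsgroup module; signatures are an assumption.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar");
          // Moving a server out of "default" is what drives the region evacuation
          // (the REOPEN/MOVE procedures) recorded earlier in this log.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("regionserver-host", 16020)), "bar");
        }
      }
    }
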
2023-07-18 02:15:02,262 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fbc284aeb66f3eaca0bb2d67e73a56a3, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:02,262 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646502262"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646502262"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646502262"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646502262"}]},"ts":"1689646502262"} 2023-07-18 02:15:02,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=78 2023-07-18 02:15:02,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=78, state=SUCCESS; OpenRegionProcedure fbc284aeb66f3eaca0bb2d67e73a56a3, server=jenkins-hbase4.apache.org,45077,1689646489555 in 187 msec 2023-07-18 02:15:02,269 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fbc284aeb66f3eaca0bb2d67e73a56a3, REOPEN/MOVE in 1.2920 sec 2023-07-18 02:15:02,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998, jenkins-hbase4.apache.org,43645,1689646493716] are moved back to default 2023-07-18 02:15:02,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 02:15:02,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:02,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:02,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:02,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 02:15:02,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:02,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:02,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:02,992 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:02,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-18 02:15:02,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 02:15:02,995 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:02,995 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:02,996 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:02,996 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:03,002 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:03,004 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,004 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 empty. 
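
CreateTableProcedure pid=84 has just been stored for 'Group_testFailRemoveGroup': a single-region table with REGION_REPLICATION '1' and one column family 'f' with VERSIONS '1', as shown in the create request above. A minimal sketch of the equivalent client-side request using the standard Admin and descriptor-builder APIs (the connection setup is illustrative; only the table name, replication, family name, and max versions mirror the log, everything else is left at defaults):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptorBuilder builder =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
                  .setRegionReplication(1)
                  .setColumnFamily(
                      ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                          .setMaxVersions(1)
                          .build());
          // createTable() is synchronous: it returns once the create procedure has
          // completed and the table's region has been assigned.
          admin.createTable(builder.build());
        }
      }
    }
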
2023-07-18 02:15:03,005 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,005 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 02:15:03,020 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:03,021 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 91b317f5dc93a86672aeff9195be5d55, NAME => 'Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:03,033 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:03,033 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 91b317f5dc93a86672aeff9195be5d55, disabling compactions & flushes 2023-07-18 02:15:03,034 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,034 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,034 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. after waiting 0 ms 2023-07-18 02:15:03,034 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,034 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
2023-07-18 02:15:03,034 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:03,036 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:03,037 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646503037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646503037"}]},"ts":"1689646503037"} 2023-07-18 02:15:03,039 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:03,040 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:03,040 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646503040"}]},"ts":"1689646503040"} 2023-07-18 02:15:03,041 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 02:15:03,044 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, ASSIGN}] 2023-07-18 02:15:03,046 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, ASSIGN 2023-07-18 02:15:03,047 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:03,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 02:15:03,198 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:03,199 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646503198"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646503198"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646503198"}]},"ts":"1689646503198"} 2023-07-18 02:15:03,200 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 
02:15:03,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 02:15:03,317 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 02:15:03,358 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 91b317f5dc93a86672aeff9195be5d55, NAME => 'Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,359 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,360 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,362 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:03,362 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:03,362 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 91b317f5dc93a86672aeff9195be5d55 columnFamilyName f 2023-07-18 02:15:03,363 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(310): 
Store=91b317f5dc93a86672aeff9195be5d55/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:03,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,364 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,373 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:03,374 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 91b317f5dc93a86672aeff9195be5d55; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11175922720, jitterRate=0.04083891212940216}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:03,374 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:03,375 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55., pid=86, masterSystemTime=1689646503352 2023-07-18 02:15:03,378 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,378 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
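
The new table's region 91b317f5dc93a86672aeff9195be5d55 has just been opened on jenkins-hbase4.apache.org,45077; further down the log it is moved again when the table is placed into RSGroup "bar". Outside of rsgroups, an individual region move like that can be requested directly through Admin.move(), which drives the same close/reopen TransitRegionStateProcedure seen throughout this log. A hedged sketch using the standard Admin API (the destination ServerName below is a placeholder; a real call must name a live server with its actual start code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Encoded region name from the log; the destination is a placeholder
          // (hostname, port, start code) and must match a running RegionServer.
          byte[] encodedRegionName = Bytes.toBytes("91b317f5dc93a86672aeff9195be5d55");
          ServerName dest = ServerName.valueOf("regionserver-host", 16020, 1L);
          // The master then runs a REOPEN/MOVE TransitRegionStateProcedure:
          // close on the current server, reassign, open on the destination.
          admin.move(encodedRegionName, dest);
        }
      }
    }
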
2023-07-18 02:15:03,379 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:03,379 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646503379"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646503379"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646503379"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646503379"}]},"ts":"1689646503379"} 2023-07-18 02:15:03,383 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-18 02:15:03,383 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555 in 181 msec 2023-07-18 02:15:03,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 02:15:03,385 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, ASSIGN in 339 msec 2023-07-18 02:15:03,386 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:03,386 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646503386"}]},"ts":"1689646503386"} 2023-07-18 02:15:03,388 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 02:15:03,392 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:03,393 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 403 msec 2023-07-18 02:15:03,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-18 02:15:03,598 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-18 02:15:03,598 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 02:15:03,598 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:03,601 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43645] ipc.CallRunner(144): callId: 276 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:42436 deadline: 1689646563600, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45077 startCode=1689646489555. As of locationSeqNum=101. 2023-07-18 02:15:03,702 DEBUG [hconnection-0x422d8bf2-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:03,705 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33484, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:03,717 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 02:15:03,717 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:03,717 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 02:15:03,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 02:15:03,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:03,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:03,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:03,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:03,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 02:15:03,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 91b317f5dc93a86672aeff9195be5d55 to RSGroup bar 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 02:15:03,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:03,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE 2023-07-18 02:15:03,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 02:15:03,733 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE 2023-07-18 02:15:03,734 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:03,734 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646503734"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646503734"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646503734"}]},"ts":"1689646503734"} 2023-07-18 02:15:03,740 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:03,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 91b317f5dc93a86672aeff9195be5d55, disabling compactions & flushes 2023-07-18 02:15:03,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. after waiting 0 ms 2023-07-18 02:15:03,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:03,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:03,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
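
The lines above record the "move tables [Group_testFailRemoveGroup] to rsgroup bar" request and the REOPEN/MOVE (pid=87/88) that closes the table's region on 45077 so it can be reopened on a server in "bar". A hedged sketch of the corresponding client call, again assuming the hbase-rsgroup RSGroupAdminClient helper (an assumption; this log does not show which client wrapper the test uses):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumed helper from the hbase-rsgroup module; signatures are an assumption.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Re-plans the table's region onto a server in "bar", producing the
          // close-on-45077 / open-on-39557 sequence shown in the surrounding log.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }
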
2023-07-18 02:15:03,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:03,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 91b317f5dc93a86672aeff9195be5d55 move to jenkins-hbase4.apache.org,39557,1689646489998 record at close sequenceid=2 2023-07-18 02:15:03,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:03,907 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSED 2023-07-18 02:15:03,907 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646503907"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646503907"}]},"ts":"1689646503907"} 2023-07-18 02:15:03,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 02:15:03,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555 in 172 msec 2023-07-18 02:15:03,911 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:15:04,061 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:04,062 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:04,062 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646504062"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646504062"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646504062"}]},"ts":"1689646504062"} 2023-07-18 02:15:04,064 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:04,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
2023-07-18 02:15:04,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 91b317f5dc93a86672aeff9195be5d55, NAME => 'Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:04,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:04,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,227 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,228 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:04,228 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:04,228 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 91b317f5dc93a86672aeff9195be5d55 columnFamilyName f 2023-07-18 02:15:04,229 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(310): Store=91b317f5dc93a86672aeff9195be5d55/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:04,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,231 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 91b317f5dc93a86672aeff9195be5d55; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10220109920, jitterRate=-0.048178091645240784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:04,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:04,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55., pid=89, masterSystemTime=1689646504216 2023-07-18 02:15:04,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:04,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:04,240 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:04,240 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646504240"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646504240"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646504240"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646504240"}]},"ts":"1689646504240"} 2023-07-18 02:15:04,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-18 02:15:04,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,39557,1689646489998 in 178 msec 2023-07-18 02:15:04,257 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE in 527 msec 2023-07-18 02:15:04,259 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-18 02:15:04,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-18 02:15:04,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
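The "move tables [Group_testFailRemoveGroup] to rsgroup bar" request and the REOPEN/MOVE procedure above are what the rsgroup admin endpoint executes for a client-side move call. A minimal sketch of that call, assuming the RSGroupAdminClient constructor that takes an open Connection (the connection setup and configuration are assumptions; only the class and method names are taken from the log's stack traces):

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testFailRemoveGroup");

      // Reassigns the table's regions to servers in group "bar"; the master logs the
      // RSGroupAdminService.MoveTables request and one REOPEN/MOVE procedure per region.
      rsGroupAdmin.moveTables(Collections.singleton(table), "bar");

      // The group info should now list the table, matching the GetRSGroupInfo call in the log.
      System.out.println("bar group after move: " + rsGroupAdmin.getRSGroupInfo("bar"));
    }
  }
}
```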
2023-07-18 02:15:04,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:04,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:04,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:04,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 02:15:04,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:04,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 02:15:04,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:04,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:39122 deadline: 1689647704740, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 02:15:04,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645] to rsgroup default 2023-07-18 02:15:04,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:04,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:39122 deadline: 1689647704741, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-18 02:15:04,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 02:15:04,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:04,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:04,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:04,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:04,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 02:15:04,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 91b317f5dc93a86672aeff9195be5d55 to RSGroup default 2023-07-18 02:15:04,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE 2023-07-18 02:15:04,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 02:15:04,753 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE 2023-07-18 02:15:04,753 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:04,754 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646504753"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646504753"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646504753"}]},"ts":"1689646504753"} 2023-07-18 02:15:04,755 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:04,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 91b317f5dc93a86672aeff9195be5d55, disabling compactions & flushes 2023-07-18 02:15:04,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:04,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:04,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. after waiting 0 ms 2023-07-18 02:15:04,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:04,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:15:04,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
2023-07-18 02:15:04,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:04,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 91b317f5dc93a86672aeff9195be5d55 move to jenkins-hbase4.apache.org,45077,1689646489555 record at close sequenceid=5 2023-07-18 02:15:04,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:04,923 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSED 2023-07-18 02:15:04,923 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646504923"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646504923"}]},"ts":"1689646504923"} 2023-07-18 02:15:04,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-18 02:15:04,927 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,39557,1689646489998 in 170 msec 2023-07-18 02:15:04,928 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:05,078 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:05,079 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646505078"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646505078"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646505078"}]},"ts":"1689646505078"} 2023-07-18 02:15:05,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:05,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
2023-07-18 02:15:05,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 91b317f5dc93a86672aeff9195be5d55, NAME => 'Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:05,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:05,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,239 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,240 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:05,240 DEBUG [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f 2023-07-18 02:15:05,241 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 91b317f5dc93a86672aeff9195be5d55 columnFamilyName f 2023-07-18 02:15:05,241 INFO [StoreOpener-91b317f5dc93a86672aeff9195be5d55-1] regionserver.HStore(310): Store=91b317f5dc93a86672aeff9195be5d55/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:05,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,243 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:05,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 91b317f5dc93a86672aeff9195be5d55; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11996436640, jitterRate=0.11725522577762604}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:05,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:05,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55., pid=92, masterSystemTime=1689646505232 2023-07-18 02:15:05,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:05,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:05,250 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:05,250 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646505250"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646505250"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646505250"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646505250"}]},"ts":"1689646505250"} 2023-07-18 02:15:05,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-18 02:15:05,254 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555 in 171 msec 2023-07-18 02:15:05,255 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, REOPEN/MOVE in 504 msec 2023-07-18 02:15:05,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-18 02:15:05,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
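After the second REOPEN/MOVE the region is back on jenkins-hbase4.apache.org,45077,... (openSeqNum=8 above), and clients that cached the old location see RegionMovedException until they refresh. One way to observe the current location from a client, sketched under the assumption of an open Connection:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
      // reload=true bypasses the client's location cache, which is effectively what the
      // RegionMovedException retries in the log force the scanner to do.
      HRegionLocation loc = locator.getRegionLocation(new byte[0], true);
      System.out.println("region " + loc.getRegion().getEncodedName()
          + " is on " + loc.getServerName());
    }
  }
}
```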
2023-07-18 02:15:05,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:05,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:05,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:05,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 02:15:05,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:05,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:39122 deadline: 1689647705760, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
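Both removeRSGroup attempts above are rejected with ConstraintException: first while "bar" still holds the table, then while it still holds three servers. The test treats those rejections as the expected outcome of testFailRemoveGroup; a minimal sketch of that assertion pattern (JUnit assertions, with rsGroupAdmin assumed to be an RSGroupAdminClient connected to the cluster, as sketched earlier):

```java
import static org.junit.Assert.fail;

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RemoveGroupMustFailSketch {
  // rsGroupAdmin is assumed to be an RSGroupAdminClient connected to the running cluster.
  static void assertRemoveRejected(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      // Rejected while the group still contains tables or servers, as in the log above.
      rsGroupAdmin.removeRSGroup("bar");
      fail("removeRSGroup should have been rejected while 'bar' is non-empty");
    } catch (ConstraintException expected) {
      // Expected: "RSGroup bar has ...; you must remove these ... before the RSGroup can be removed."
    }
  }
}
```

Only after the table and then the servers are moved back to the default group does removeRSGroup succeed, which is the "Writing ZK GroupInfo count: 5" transition further down.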
2023-07-18 02:15:05,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645] to rsgroup default 2023-07-18 02:15:05,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:05,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 02:15:05,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:05,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:05,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 02:15:05,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998, jenkins-hbase4.apache.org,43645,1689646493716] are moved back to bar 2023-07-18 02:15:05,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 02:15:05,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:05,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:05,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:05,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 02:15:05,782 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43645] ipc.CallRunner(144): callId: 217 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:42424 deadline: 1689646565782, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45077 startCode=1689646489555. As of locationSeqNum=10. 
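The "move servers [...:35063, ...:39557, ...:43645] to rsgroup default" request that succeeds above ("Move servers done: bar => default") corresponds to RSGroupAdminClient.moveServers, which takes host:port Address values. A sketch with the host names and ports copied from the log (the helper method and the rsGroupAdmin parameter are assumptions):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  // rsGroupAdmin is assumed to be an RSGroupAdminClient connected to the running cluster.
  static void moveBarServersToDefault(RSGroupAdminClient rsGroupAdmin) throws Exception {
    Set<Address> servers = new HashSet<>(Arrays.asList(
        Address.fromParts("jenkins-hbase4.apache.org", 35063),
        Address.fromParts("jenkins-hbase4.apache.org", 39557),
        Address.fromParts("jenkins-hbase4.apache.org", 43645)));
    // Succeeds only because the table was moved out of "bar" first; otherwise the master
    // rejects it ("Cannot leave a RSGroup bar that contains tables without servers to host them.").
    rsGroupAdmin.moveServers(servers, "default");
  }
}
```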
2023-07-18 02:15:06,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:06,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,096 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 02:15:06,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 02:15:06,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 02:15:06,101 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646506101"}]},"ts":"1689646506101"} 2023-07-18 02:15:06,102 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 02:15:06,104 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 02:15:06,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, UNASSIGN}] 2023-07-18 02:15:06,106 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, UNASSIGN 2023-07-18 02:15:06,107 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:06,107 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646506107"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646506107"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646506107"}]},"ts":"1689646506107"} 2023-07-18 02:15:06,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:06,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 02:15:06,262 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:06,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 91b317f5dc93a86672aeff9195be5d55, disabling compactions & flushes 2023-07-18 02:15:06,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:06,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:06,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. after waiting 0 ms 2023-07-18 02:15:06,264 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 2023-07-18 02:15:06,268 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 02:15:06,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55. 
2023-07-18 02:15:06,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 91b317f5dc93a86672aeff9195be5d55: 2023-07-18 02:15:06,270 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:06,271 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=91b317f5dc93a86672aeff9195be5d55, regionState=CLOSED 2023-07-18 02:15:06,271 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689646506271"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646506271"}]},"ts":"1689646506271"} 2023-07-18 02:15:06,274 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 02:15:06,274 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 91b317f5dc93a86672aeff9195be5d55, server=jenkins-hbase4.apache.org,45077,1689646489555 in 163 msec 2023-07-18 02:15:06,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-18 02:15:06,275 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=91b317f5dc93a86672aeff9195be5d55, UNASSIGN in 169 msec 2023-07-18 02:15:06,276 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646506276"}]},"ts":"1689646506276"} 2023-07-18 02:15:06,277 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 02:15:06,279 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 02:15:06,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 182 msec 2023-07-18 02:15:06,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 02:15:06,404 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-18 02:15:06,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 02:15:06,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,407 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 02:15:06,408 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:06,412 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:06,414 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits] 2023-07-18 02:15:06,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-18 02:15:06,421 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/10.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55/recovered.edits/10.seqid 2023-07-18 02:15:06,421 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testFailRemoveGroup/91b317f5dc93a86672aeff9195be5d55 2023-07-18 02:15:06,422 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 02:15:06,425 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,427 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 02:15:06,430 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 02:15:06,431 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,431 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-18 02:15:06,431 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646506431"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:06,436 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 02:15:06,436 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 91b317f5dc93a86672aeff9195be5d55, NAME => 'Group_testFailRemoveGroup,,1689646502989.91b317f5dc93a86672aeff9195be5d55.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 02:15:06,436 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-18 02:15:06,436 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646506436"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:06,440 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 02:15:06,443 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 02:15:06,444 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 38 msec 2023-07-18 02:15:06,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-18 02:15:06,519 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-18 02:15:06,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:06,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
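The DisableTableProcedure (pid=93) and DeleteTableProcedure (pid=96) completed above correspond to the standard Admin calls the test issues during cleanup; a minimal sketch, assuming an open Connection to the same cluster:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Each call blocks until the corresponding master procedure finishes, i.e. the
      // "Operation: DISABLE ... completed" / "Operation: DELETE ... completed" lines above.
      if (admin.tableExists(table)) {
        admin.disableTable(table);
        admin.deleteTable(table);
      }
    }
  }
}
```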
2023-07-18 02:15:06,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:06,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:06,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:06,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:06,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:06,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:06,537 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:06,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:06,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:06,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:06,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,550 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:06,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:06,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 343 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647706552, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:06,552 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:06,554 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:06,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,555 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:06,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:06,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:06,574 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=521 (was 507) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1021434382_17 at /127.0.0.1:47176 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-295919230_17 at /127.0.0.1:43850 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-295919230_17 at /127.0.0.1:38026 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7-prefix:jenkins-hbase4.apache.org,45077,1689646489555.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1021434382_17 at /127.0.0.1:47154 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1021434382_17 at /127.0.0.1:37996 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1021434382_17 at /127.0.0.1:43786 [Receiving block BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-566210079-172.31.14.131-1689646483854:blk_1073741860_1036, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data3/current sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=819 (was 802) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=399 (was 434), ProcessCount=172 (was 172), AvailableMemoryMB=2687 (was 2892) 2023-07-18 02:15:06,574 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-18 02:15:06,590 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=521, OpenFileDescriptor=819, MaxFileDescriptor=60000, SystemLoadAverage=399, ProcessCount=172, AvailableMemoryMB=2686 2023-07-18 02:15:06,590 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-18 02:15:06,591 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 02:15:06,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:06,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
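The teardown/setup sequence logged above (move tables [] and servers [] back to the default group, drop and re-create the helper rsgroup named "master", list the groups, then try to move the active master's own address into it) is driven through the RSGroupAdminClient class that appears in the stack traces. The following is a minimal client-side sketch of that sequence, assuming the RSGroupAdminClient, Address and ConstraintException APIs named in those traces; the host and port are placeholders, and the constructor/method signatures are assumptions based on the traces rather than values taken from this run:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // Drop and re-create the helper group, mirroring the RemoveRSGroup /
          // AddRSGroup requests in the log (assumes the group already exists).
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");

          // Moving the active master's address is rejected, because rsgroup
          // membership only covers live region servers; host and port here are
          // placeholders standing in for jenkins-hbase4.apache.org:40909.
          Address masterAddress = Address.fromParts("master-host.example.org", 40909);
          try {
            rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
          } catch (ConstraintException expected) {
            // The test logs this as "Got this on setup, FYI" and carries on.
          }
        }
      }
    }

The rejected call is expected in this run: the master process is not a live region server, so RSGroupAdminServer answers with the ConstraintException ("is either offline or it does not exist") that TestRSGroupsBase then logs as a warning and ignores.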
2023-07-18 02:15:06,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:06,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:06,596 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:06,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:06,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,600 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:06,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:06,605 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:06,605 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:06,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:06,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:06,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:06,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:06,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 371 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647706618, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:06,619 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:06,624 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:06,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,625 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:06,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:06,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:06,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:06,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:06,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,635 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:06,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063] to rsgroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808] are moved back to default 2023-07-18 02:15:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:06,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:06,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:06,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:06,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:06,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:06,656 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:06,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-18 02:15:06,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 02:15:06,658 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:06,659 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:06,659 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:06,660 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:06,665 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:06,667 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:06,667 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 empty. 
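The create-table request above, create 'GrouptestMultiTableMoveA' with the single family 'f', is what a client-side Admin call turns into on the master, which then executes it as CreateTableProcedure pid=97. A minimal sketch of such a call using the standard HBase 2.x Admin and TableDescriptorBuilder APIs; the table and family names come from the log, while the connection setup is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One column family 'f', as in the request logged above.
          TableDescriptorBuilder table =
              TableDescriptorBuilder.newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
          // Submits a CreateTableProcedure on the master (pid=97 in this run).
          admin.createTable(table.build());
        }
      }
    }

The per-family attributes echoed in the log line (BLOOMFILTER => 'NONE', VERSIONS => '1', TTL => 'FOREVER', and so on) are printed from the descriptor that was actually submitted; whether the test set them explicitly or relied on defaults is not visible from the log alone.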
2023-07-18 02:15:06,667 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:06,667 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 02:15:06,683 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:06,684 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => eb303b133fa81bbeab9c33ccc3d43c79, NAME => 'GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing eb303b133fa81bbeab9c33ccc3d43c79, disabling compactions & flushes 2023-07-18 02:15:06,697 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. after waiting 0 ms 2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:06,697 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:06,697 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for eb303b133fa81bbeab9c33ccc3d43c79: 2023-07-18 02:15:06,700 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:06,701 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646506701"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646506701"}]},"ts":"1689646506701"} 2023-07-18 02:15:06,703 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:06,703 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:06,704 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646506703"}]},"ts":"1689646506703"} 2023-07-18 02:15:06,705 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 02:15:06,712 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:06,712 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:06,712 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:06,712 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:06,712 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:06,713 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, ASSIGN}] 2023-07-18 02:15:06,714 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, ASSIGN 2023-07-18 02:15:06,715 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:15:06,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 02:15:06,865 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 02:15:06,867 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:06,867 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646506867"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646506867"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646506867"}]},"ts":"1689646506867"} 2023-07-18 02:15:06,869 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:06,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 02:15:07,026 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:07,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb303b133fa81bbeab9c33ccc3d43c79, NAME => 'GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:07,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:07,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,028 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,030 DEBUG [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/f 2023-07-18 02:15:07,030 DEBUG [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/f 2023-07-18 02:15:07,030 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb303b133fa81bbeab9c33ccc3d43c79 columnFamilyName f 2023-07-18 02:15:07,031 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] regionserver.HStore(310): Store=eb303b133fa81bbeab9c33ccc3d43c79/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:07,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:07,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:07,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb303b133fa81bbeab9c33ccc3d43c79; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11178915680, jitterRate=0.041117653250694275}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:07,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb303b133fa81bbeab9c33ccc3d43c79: 2023-07-18 02:15:07,040 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79., pid=99, masterSystemTime=1689646507022 2023-07-18 02:15:07,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:07,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:07,043 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:07,043 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507043"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646507043"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646507043"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646507043"}]},"ts":"1689646507043"} 2023-07-18 02:15:07,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-18 02:15:07,048 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,39557,1689646489998 in 176 msec 2023-07-18 02:15:07,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-18 02:15:07,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, ASSIGN in 335 msec 2023-07-18 02:15:07,051 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:07,051 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646507051"}]},"ts":"1689646507051"} 2023-07-18 02:15:07,053 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 02:15:07,057 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:07,059 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 403 msec 2023-07-18 02:15:07,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 02:15:07,262 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-18 02:15:07,262 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 02:15:07,263 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:07,268 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-18 02:15:07,269 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:07,269 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 02:15:07,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:07,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:07,274 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:07,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-18 02:15:07,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 02:15:07,280 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,281 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:07,282 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:07,282 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:07,286 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:07,289 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,289 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b empty. 
2023-07-18 02:15:07,290 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,290 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 02:15:07,318 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:07,320 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9300591cdd331b2c94f0c2611e778d0b, NAME => 'GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:07,336 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:07,336 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 9300591cdd331b2c94f0c2611e778d0b, disabling compactions & flushes 2023-07-18 02:15:07,336 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:07,336 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:07,336 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. after waiting 0 ms 2023-07-18 02:15:07,337 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:07,337 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 
2023-07-18 02:15:07,337 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 9300591cdd331b2c94f0c2611e778d0b: 2023-07-18 02:15:07,339 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:07,340 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646507340"}]},"ts":"1689646507340"} 2023-07-18 02:15:07,341 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:07,342 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:07,342 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646507342"}]},"ts":"1689646507342"} 2023-07-18 02:15:07,343 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 02:15:07,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:07,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:07,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:07,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:07,347 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:07,347 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, ASSIGN}] 2023-07-18 02:15:07,349 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, ASSIGN 2023-07-18 02:15:07,349 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39557,1689646489998; forceNewPlan=false, retain=false 2023-07-18 02:15:07,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 02:15:07,500 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 02:15:07,501 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:07,501 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507501"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646507501"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646507501"}]},"ts":"1689646507501"} 2023-07-18 02:15:07,503 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:07,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 02:15:07,659 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:07,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9300591cdd331b2c94f0c2611e778d0b, NAME => 'GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:07,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:07,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,659 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,661 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,662 DEBUG [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/f 2023-07-18 02:15:07,663 DEBUG [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/f 2023-07-18 02:15:07,663 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9300591cdd331b2c94f0c2611e778d0b columnFamilyName f 2023-07-18 02:15:07,664 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] regionserver.HStore(310): Store=9300591cdd331b2c94f0c2611e778d0b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:07,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:07,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:07,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9300591cdd331b2c94f0c2611e778d0b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11546910080, jitterRate=0.0753898024559021}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:07,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9300591cdd331b2c94f0c2611e778d0b: 2023-07-18 02:15:07,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b., pid=102, masterSystemTime=1689646507654 2023-07-18 02:15:07,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:07,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 
2023-07-18 02:15:07,678 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:07,678 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507678"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646507678"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646507678"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646507678"}]},"ts":"1689646507678"} 2023-07-18 02:15:07,682 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-18 02:15:07,682 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,39557,1689646489998 in 176 msec 2023-07-18 02:15:07,683 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-18 02:15:07,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, ASSIGN in 335 msec 2023-07-18 02:15:07,684 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:07,684 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646507684"}]},"ts":"1689646507684"} 2023-07-18 02:15:07,686 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 02:15:07,688 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:07,690 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 417 msec 2023-07-18 02:15:07,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-18 02:15:07,880 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-18 02:15:07,880 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 02:15:07,881 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:07,885 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-18 02:15:07,886 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:07,886 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 02:15:07,887 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:07,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 02:15:07,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:07,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 02:15:07,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:07,901 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:07,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:07,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:07,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region 9300591cdd331b2c94f0c2611e778d0b to RSGroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, REOPEN/MOVE 2023-07-18 02:15:07,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,912 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region eb303b133fa81bbeab9c33ccc3d43c79 to RSGroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:07,912 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, REOPEN/MOVE 2023-07-18 02:15:07,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, REOPEN/MOVE 2023-07-18 02:15:07,913 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:07,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1393305614, current retry=0 2023-07-18 02:15:07,916 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, REOPEN/MOVE 2023-07-18 02:15:07,916 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507913"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646507913"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646507913"}]},"ts":"1689646507913"} 2023-07-18 02:15:07,916 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:07,917 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646507916"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646507916"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646507916"}]},"ts":"1689646507916"} 2023-07-18 02:15:07,917 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:07,918 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,39557,1689646489998}] 2023-07-18 02:15:08,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9300591cdd331b2c94f0c2611e778d0b, disabling compactions & flushes 2023-07-18 02:15:08,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. after waiting 0 ms 2023-07-18 02:15:08,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:08,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9300591cdd331b2c94f0c2611e778d0b: 2023-07-18 02:15:08,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9300591cdd331b2c94f0c2611e778d0b move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:15:08,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb303b133fa81bbeab9c33ccc3d43c79, disabling compactions & flushes 2023-07-18 02:15:08,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:08,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:08,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. after waiting 0 ms 2023-07-18 02:15:08,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:08,088 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=CLOSED 2023-07-18 02:15:08,088 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508088"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646508088"}]},"ts":"1689646508088"} 2023-07-18 02:15:08,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:08,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:08,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb303b133fa81bbeab9c33ccc3d43c79: 2023-07-18 02:15:08,096 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eb303b133fa81bbeab9c33ccc3d43c79 move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:15:08,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,100 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=CLOSED 2023-07-18 02:15:08,100 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508100"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646508100"}]},"ts":"1689646508100"} 2023-07-18 02:15:08,101 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-18 02:15:08,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,39557,1689646489998 in 174 msec 2023-07-18 02:15:08,103 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:15:08,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-18 02:15:08,104 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,39557,1689646489998 in 184 msec 2023-07-18 02:15:08,108 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, 
REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; forceNewPlan=false, retain=false 2023-07-18 02:15:08,253 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:08,253 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:08,253 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508253"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646508253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646508253"}]},"ts":"1689646508253"} 2023-07-18 02:15:08,253 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508253"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646508253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646508253"}]},"ts":"1689646508253"} 2023-07-18 02:15:08,255 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:08,256 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:08,410 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 
2023-07-18 02:15:08,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9300591cdd331b2c94f0c2611e778d0b, NAME => 'GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:08,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:08,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,413 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,415 DEBUG [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/f 2023-07-18 02:15:08,415 DEBUG [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/f 2023-07-18 02:15:08,415 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9300591cdd331b2c94f0c2611e778d0b columnFamilyName f 2023-07-18 02:15:08,416 INFO [StoreOpener-9300591cdd331b2c94f0c2611e778d0b-1] regionserver.HStore(310): Store=9300591cdd331b2c94f0c2611e778d0b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:08,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:08,422 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9300591cdd331b2c94f0c2611e778d0b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10013886720, jitterRate=-0.06738412380218506}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:08,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9300591cdd331b2c94f0c2611e778d0b: 2023-07-18 02:15:08,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b., pid=107, masterSystemTime=1689646508407 2023-07-18 02:15:08,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,424 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:08,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb303b133fa81bbeab9c33ccc3d43c79, NAME => 'GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:08,425 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:08,425 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508425"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646508425"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646508425"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646508425"}]},"ts":"1689646508425"} 2023-07-18 02:15:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,429 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-18 02:15:08,429 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,35063,1689646489808 in 172 msec 2023-07-18 02:15:08,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, REOPEN/MOVE in 519 msec 2023-07-18 02:15:08,431 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,432 DEBUG [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/f 2023-07-18 02:15:08,432 DEBUG [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/f 2023-07-18 02:15:08,432 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb303b133fa81bbeab9c33ccc3d43c79 columnFamilyName f 2023-07-18 02:15:08,433 INFO [StoreOpener-eb303b133fa81bbeab9c33ccc3d43c79-1] regionserver.HStore(310): Store=eb303b133fa81bbeab9c33ccc3d43c79/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:08,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:08,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb303b133fa81bbeab9c33ccc3d43c79; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10567041920, jitterRate=-0.015867531299591064}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:08,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb303b133fa81bbeab9c33ccc3d43c79: 2023-07-18 02:15:08,439 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79., pid=108, masterSystemTime=1689646508407 2023-07-18 02:15:08,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:08,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:08,441 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:08,441 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508441"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646508441"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646508441"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646508441"}]},"ts":"1689646508441"} 2023-07-18 02:15:08,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-18 02:15:08,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,35063,1689646489808 in 187 msec 2023-07-18 02:15:08,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, REOPEN/MOVE in 532 msec 2023-07-18 02:15:08,861 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 02:15:08,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-18 02:15:08,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1393305614. 
2023-07-18 02:15:08,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:08,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:08,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:08,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 02:15:08,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:08,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 02:15:08,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:08,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:08,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:08,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1393305614 2023-07-18 02:15:08,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:08,925 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 02:15:08,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 02:15:08,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:08,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 02:15:08,929 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646508929"}]},"ts":"1689646508929"} 2023-07-18 02:15:08,930 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 02:15:08,932 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 02:15:08,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, UNASSIGN}] 2023-07-18 02:15:08,934 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, UNASSIGN 2023-07-18 02:15:08,935 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:08,935 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646508935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646508935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646508935"}]},"ts":"1689646508935"} 2023-07-18 02:15:08,936 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:09,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 02:15:09,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:09,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb303b133fa81bbeab9c33ccc3d43c79, disabling compactions & flushes 2023-07-18 02:15:09,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:09,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:09,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. after waiting 0 ms 2023-07-18 02:15:09,089 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 
2023-07-18 02:15:09,093 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:15:09,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79. 2023-07-18 02:15:09,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb303b133fa81bbeab9c33ccc3d43c79: 2023-07-18 02:15:09,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:09,097 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=eb303b133fa81bbeab9c33ccc3d43c79, regionState=CLOSED 2023-07-18 02:15:09,098 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646509097"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646509097"}]},"ts":"1689646509097"} 2023-07-18 02:15:09,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-18 02:15:09,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure eb303b133fa81bbeab9c33ccc3d43c79, server=jenkins-hbase4.apache.org,35063,1689646489808 in 163 msec 2023-07-18 02:15:09,102 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-18 02:15:09,102 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=eb303b133fa81bbeab9c33ccc3d43c79, UNASSIGN in 168 msec 2023-07-18 02:15:09,103 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646509102"}]},"ts":"1689646509102"} 2023-07-18 02:15:09,104 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 02:15:09,106 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 02:15:09,108 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 181 msec 2023-07-18 02:15:09,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 02:15:09,232 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-18 02:15:09,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 02:15:09,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,236 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1393305614' 2023-07-18 02:15:09,236 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:09,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,241 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:09,243 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits] 2023-07-18 02:15:09,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-18 02:15:09,249 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79/recovered.edits/7.seqid 2023-07-18 02:15:09,250 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveA/eb303b133fa81bbeab9c33ccc3d43c79 2023-07-18 02:15:09,250 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 02:15:09,252 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,254 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 02:15:09,255 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 02:15:09,256 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,256 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-18 02:15:09,256 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646509256"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:09,258 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 02:15:09,258 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => eb303b133fa81bbeab9c33ccc3d43c79, NAME => 'GrouptestMultiTableMoveA,,1689646506653.eb303b133fa81bbeab9c33ccc3d43c79.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 02:15:09,258 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 02:15:09,258 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646509258"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:09,259 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 02:15:09,261 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 02:15:09,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 28 msec 2023-07-18 02:15:09,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-18 02:15:09,352 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-18 02:15:09,353 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 02:15:09,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 02:15:09,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 02:15:09,357 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646509357"}]},"ts":"1689646509357"} 2023-07-18 02:15:09,358 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 02:15:09,365 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 02:15:09,366 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, UNASSIGN}] 2023-07-18 02:15:09,368 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, UNASSIGN 2023-07-18 02:15:09,368 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:09,368 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646509368"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646509368"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646509368"}]},"ts":"1689646509368"} 2023-07-18 02:15:09,370 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:09,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 02:15:09,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:09,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9300591cdd331b2c94f0c2611e778d0b, disabling compactions & flushes 2023-07-18 02:15:09,523 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:09,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:09,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. after waiting 0 ms 2023-07-18 02:15:09,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 2023-07-18 02:15:09,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:15:09,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b. 
2023-07-18 02:15:09,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9300591cdd331b2c94f0c2611e778d0b: 2023-07-18 02:15:09,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:09,531 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=9300591cdd331b2c94f0c2611e778d0b, regionState=CLOSED 2023-07-18 02:15:09,531 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689646509531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646509531"}]},"ts":"1689646509531"} 2023-07-18 02:15:09,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 02:15:09,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure 9300591cdd331b2c94f0c2611e778d0b, server=jenkins-hbase4.apache.org,35063,1689646489808 in 162 msec 2023-07-18 02:15:09,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-18 02:15:09,535 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=9300591cdd331b2c94f0c2611e778d0b, UNASSIGN in 168 msec 2023-07-18 02:15:09,536 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646509536"}]},"ts":"1689646509536"} 2023-07-18 02:15:09,539 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 02:15:09,541 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 02:15:09,543 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 188 msec 2023-07-18 02:15:09,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 02:15:09,660 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-18 02:15:09,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 02:15:09,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,664 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1393305614' 2023-07-18 02:15:09,665 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:09,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,670 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:09,672 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits] 2023-07-18 02:15:09,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-18 02:15:09,679 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits/7.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b/recovered.edits/7.seqid 2023-07-18 02:15:09,679 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/GrouptestMultiTableMoveB/9300591cdd331b2c94f0c2611e778d0b 2023-07-18 02:15:09,679 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 02:15:09,683 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,685 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 02:15:09,694 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 02:15:09,695 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,696 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-18 02:15:09,696 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646509696"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:09,700 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 02:15:09,700 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9300591cdd331b2c94f0c2611e778d0b, NAME => 'GrouptestMultiTableMoveB,,1689646507270.9300591cdd331b2c94f0c2611e778d0b.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 02:15:09,700 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 02:15:09,700 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646509700"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:09,702 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 02:15:09,704 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 02:15:09,705 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 43 msec 2023-07-18 02:15:09,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-18 02:15:09,776 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-18 02:15:09,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:09,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063] to rsgroup default 2023-07-18 02:15:09,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1393305614 2023-07-18 02:15:09,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1393305614, current retry=0 2023-07-18 02:15:09,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808] are moved back to Group_testMultiTableMove_1393305614 2023-07-18 02:15:09,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1393305614 => default 2023-07-18 02:15:09,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1393305614 2023-07-18 02:15:09,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:09,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:09,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:09,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:09,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,799 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:09,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,803 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:09,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:09,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:09,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:09,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:09,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 509 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647709813, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:09,814 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:09,815 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:09,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,817 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:09,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:09,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,838 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=519 (was 521), OpenFileDescriptor=813 (was 819), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=447 (was 399) - SystemLoadAverage LEAK? -, ProcessCount=170 (was 172), AvailableMemoryMB=4680 (was 2686) - AvailableMemoryMB LEAK? 
- 2023-07-18 02:15:09,838 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-18 02:15:09,855 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=519, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=447, ProcessCount=170, AvailableMemoryMB=4679 2023-07-18 02:15:09,855 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=519 is superior to 500 2023-07-18 02:15:09,855 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 02:15:09,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:09,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:09,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,861 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:09,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:09,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,868 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:09,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:09,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,872 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:09,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:09,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:09,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 537 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647709881, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:09,881 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:09,883 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:09,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,884 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:09,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:09,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:09,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 02:15:09,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:09,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup oldGroup 2023-07-18 02:15:09,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:09,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to default 2023-07-18 02:15:09,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 02:15:09,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 02:15:09,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 02:15:09,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:09,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,906 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 02:15:09,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 02:15:09,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:09,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:09,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43645] to rsgroup anotherRSGroup 2023-07-18 02:15:09,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 02:15:09,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:09,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:09,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43645,1689646493716] are moved back to default 2023-07-18 02:15:09,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 02:15:09,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,923 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 02:15:09,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 02:15:09,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:09,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 02:15:09,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 571 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:39122 deadline: 1689647709931, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 02:15:09,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 02:15:09,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:39122 deadline: 1689647709933, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 02:15:09,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 02:15:09,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:39122 deadline: 1689647709934, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 02:15:09,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 02:15:09,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:39122 deadline: 1689647709935, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 02:15:09,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
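The rename attempts above (from nonExistingRSGroup, onto anotherRSGroup, from default, and onto default) each fail inside RSGroupInfoManagerImpl.renameRSGroup with a ConstraintException. The following is only a rough sketch, reconstructed from the exception messages and the trace line numbers (403, 407, 410), of the validation order those checks imply; the class name, method signature, and parameter names are assumptions for illustration, not the actual HBase source.

import java.util.Map;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Illustrative only: the constraint checks suggested by the log, in the order the
// stack-trace line numbers indicate (default group first, then "does not exist", then "already exists").
final class RenameRSGroupChecksSketch {
  static void validateRename(String oldName, String newName,
      Map<String, RSGroupInfo> existingGroups) throws ConstraintException {
    if (RSGroupInfo.DEFAULT_GROUP.equals(oldName)) {
      throw new ConstraintException("Can't rename default rsgroup");           // seen at line 403
    }
    if (!existingGroups.containsKey(oldName)) {
      throw new ConstraintException("RSGroup " + oldName + " does not exist"); // seen at line 407
    }
    if (existingGroups.containsKey(newName)) {
      throw new ConstraintException("Group already exists: " + newName);       // seen at line 410
    }
    // A successful rename would then re-register the group under newName and persist the
    // updated group map, which is what the "Updating znode: /hbase/rsgroup/..." lines reflect.
  }
}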
2023-07-18 02:15:09,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43645] to rsgroup default 2023-07-18 02:15:09,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 02:15:09,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:09,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 02:15:09,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43645,1689646493716] are moved back to anotherRSGroup 2023-07-18 02:15:09,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 02:15:09,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 02:15:09,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 02:15:09,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 02:15:09,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup default 2023-07-18 02:15:09,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 02:15:09,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:09,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 02:15:09,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to oldGroup 2023-07-18 02:15:09,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 02:15:09,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 02:15:09,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:09,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:09,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
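Between tests, TestRSGroupsBase.tearDownAfterMethod (visible in the earlier stack trace) drives the sequence above: empty moveTables/moveServers calls against default, moving the remaining servers out of oldGroup and anotherRSGroup, then removing those groups. A minimal sketch of that reset pattern, assuming an RSGroupAdmin handle named rsGroupAdmin; the loop and naming are illustrative, not the test's verbatim code.

import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Illustrative per-test reset: move everything back to the default group, then drop the
// extra groups so the next test starts from a clean group map.
final class RSGroupCleanupSketch {
  static void resetToDefault(RSGroupAdmin rsGroupAdmin) throws Exception {
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // the default group itself is never removed
      }
      // "move tables [...] to rsgroup default" / "move servers [...] to rsgroup default"
      rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      // "remove rsgroup oldGroup" / "remove rsgroup anotherRSGroup" / "remove rsgroup master"
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}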
2023-07-18 02:15:09,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:09,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:09,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:09,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:09,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:09,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:09,989 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:09,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:09,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:09,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:09,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:09,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:09,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:09,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:09,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:09,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:09,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 613 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647709999, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:10,000 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:10,002 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:10,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,003 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:10,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:10,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:10,025 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=523 (was 519) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=813 (was 813), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=447 (was 447), ProcessCount=170 (was 170), AvailableMemoryMB=4679 (was 4679) 2023-07-18 02:15:10,025 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 02:15:10,045 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=523, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=447, ProcessCount=170, AvailableMemoryMB=4678 2023-07-18 02:15:10,045 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 02:15:10,045 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 02:15:10,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:10,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
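The ConstraintException stack traces above come from the per-test cleanup in TestRSGroupsBase: around each test method it lists the groups, moves any leftover tables and servers back to the default group, and then tries to move the master's own address (port 40909) into a re-created 'master' group, which the master rejects because that address is not a live region server, hence the "Got this on setup, FYI" warning. Below is a minimal sketch of just the main cleanup loop, assuming the RSGroupAdminClient API that the stack traces show this branch using; the connection handling and class name are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move everything that is not in the default group back to it and drop
          // the extra groups; an empty table set is what produces the
          // "moveTables() passed an empty set. Ignoring." line in the log.
          for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
              rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
              rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
              rsGroupAdmin.removeRSGroup(group.getName());
            }
          }
        }
      }
    }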
2023-07-18 02:15:10,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:10,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:10,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:10,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:10,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:10,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:10,060 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:10,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:10,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:10,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:10,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:10,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:10,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:10,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 641 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647710076, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:10,077 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:10,079 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:10,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,079 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:10,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:10,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:10,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:10,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:10,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 02:15:10,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:10,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:10,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:10,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:10,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup oldgroup 2023-07-18 02:15:10,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:10,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:10,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:10,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:10,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to default 2023-07-18 02:15:10,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 02:15:10,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:10,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:10,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:10,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 02:15:10,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:10,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:10,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 02:15:10,104 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:10,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-18 02:15:10,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 02:15:10,106 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:10,107 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,107 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:10,107 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:10,109 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:10,110 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,111 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d empty. 
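Just before the CreateTableProcedure above, the test added the group 'oldgroup' and moved two of the four region servers (ports 35063 and 39557) into it. A hedged sketch of those two calls via the same RSGroupAdminClient; the host name and ports are the ones from this log and are otherwise arbitrary.

    import java.io.IOException;
    import java.util.Set;
    import java.util.TreeSet;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      /** Create 'oldgroup' and move the two region servers seen in the log into it. */
      static void setUpOldGroup(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("oldgroup");
        Set<Address> servers = new TreeSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35063));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39557));
        // The master first drains any regions on these servers back to 'default',
        // then records the new membership ("Move servers done: default => oldgroup").
        rsGroupAdmin.moveServers(servers, "oldgroup");
      }
    }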
2023-07-18 02:15:10,111 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,112 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 02:15:10,126 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:10,127 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => bbf71cfacd6e4740d14aa9af8f240c8d, NAME => 'testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:10,138 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:10,138 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing bbf71cfacd6e4740d14aa9af8f240c8d, disabling compactions & flushes 2023-07-18 02:15:10,139 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,139 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,139 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. after waiting 0 ms 2023-07-18 02:15:10,139 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,139 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,139 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:10,141 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:10,142 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646510142"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646510142"}]},"ts":"1689646510142"} 2023-07-18 02:15:10,143 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 02:15:10,144 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:10,144 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646510144"}]},"ts":"1689646510144"} 2023-07-18 02:15:10,145 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 02:15:10,149 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:10,149 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:10,149 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:10,149 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:10,150 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, ASSIGN}] 2023-07-18 02:15:10,151 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, ASSIGN 2023-07-18 02:15:10,152 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:10,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 02:15:10,302 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 02:15:10,304 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:10,304 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646510304"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646510304"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646510304"}]},"ts":"1689646510304"} 2023-07-18 02:15:10,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:10,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 02:15:10,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bbf71cfacd6e4740d14aa9af8f240c8d, NAME => 'testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:10,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:10,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,464 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,465 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:10,466 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:10,466 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bbf71cfacd6e4740d14aa9af8f240c8d columnFamilyName tr 2023-07-18 02:15:10,467 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(310): Store=bbf71cfacd6e4740d14aa9af8f240c8d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:10,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:10,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bbf71cfacd6e4740d14aa9af8f240c8d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11965234080, jitterRate=0.11434926092624664}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:10,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:10,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d., pid=119, masterSystemTime=1689646510457 2023-07-18 02:15:10,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
2023-07-18 02:15:10,483 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:10,483 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646510482"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646510482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646510482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646510482"}]},"ts":"1689646510482"} 2023-07-18 02:15:10,486 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-18 02:15:10,486 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716 in 179 msec 2023-07-18 02:15:10,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 02:15:10,489 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, ASSIGN in 337 msec 2023-07-18 02:15:10,489 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:10,490 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646510489"}]},"ts":"1689646510489"} 2023-07-18 02:15:10,491 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 02:15:10,493 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:10,495 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 391 msec 2023-07-18 02:15:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-18 02:15:10,709 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-18 02:15:10,710 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 02:15:10,710 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:10,714 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 02:15:10,714 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:10,714 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
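The table itself ('testRename' with the single column family 'tr') is created through the ordinary Admin API; the CreateTableProcedure and assignment steps logged above all run on the master once the request is submitted, and the test then waits until HBaseTestingUtility reports the region assigned. A minimal sketch of the client side using builder defaults, which may differ in detail (for example the bloom filter setting) from the exact attributes printed in the create line.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateTestRenameSketch {
      /** Create the single-family table seen in the log and block until it exists. */
      static void createTestRename(Connection conn) throws IOException {
        TableName table = TableName.valueOf("testRename");
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
            // Family 'tr'; REGION_REPLICATION defaults to 1 as in the log.
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
            .build();
        try (Admin admin = conn.getAdmin()) {
          // Submits a CreateTableProcedure (pid=117 above) and waits for completion.
          admin.createTable(desc);
        }
      }
    }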
2023-07-18 02:15:10,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 02:15:10,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:10,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:10,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:10,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:10,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 02:15:10,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region bbf71cfacd6e4740d14aa9af8f240c8d to RSGroup oldgroup 2023-07-18 02:15:10,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:10,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:10,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE 2023-07-18 02:15:10,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 02:15:10,724 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE 2023-07-18 02:15:10,725 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:10,725 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646510725"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646510725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646510725"}]},"ts":"1689646510725"} 2023-07-18 02:15:10,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:10,880 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bbf71cfacd6e4740d14aa9af8f240c8d, disabling compactions & flushes 2023-07-18 02:15:10,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. after waiting 0 ms 2023-07-18 02:15:10,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:10,886 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:10,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:10,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bbf71cfacd6e4740d14aa9af8f240c8d move to jenkins-hbase4.apache.org,35063,1689646489808 record at close sequenceid=2 2023-07-18 02:15:10,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:10,889 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=CLOSED 2023-07-18 02:15:10,889 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646510889"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646510889"}]},"ts":"1689646510889"} 2023-07-18 02:15:10,892 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 02:15:10,892 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716 in 165 msec 2023-07-18 02:15:10,893 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35063,1689646489808; 
forceNewPlan=false, retain=false 2023-07-18 02:15:11,043 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:11,044 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:11,044 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646511044"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646511044"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646511044"}]},"ts":"1689646511044"} 2023-07-18 02:15:11,046 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:11,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bbf71cfacd6e4740d14aa9af8f240c8d, NAME => 'testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,204 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,205 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:11,205 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:11,205 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bbf71cfacd6e4740d14aa9af8f240c8d columnFamilyName tr 2023-07-18 02:15:11,206 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(310): Store=bbf71cfacd6e4740d14aa9af8f240c8d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:11,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:11,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bbf71cfacd6e4740d14aa9af8f240c8d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10605162400, jitterRate=-0.012317284941673279}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:11,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:11,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d., pid=122, masterSystemTime=1689646511197 2023-07-18 02:15:11,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:11,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
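With the region reopened on jenkins-hbase4.apache.org:35063, the moveTables request issued at 02:15:10,717 is about to complete (the master blocks in ProcedureSyncWait until pid=120 finishes). A hedged sketch of that client call plus a follow-up location check; getRSGroupInfoOfTable and RegionLocator are standard APIs, but the printed verification is illustrative rather than what the test asserts.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableSketch {
      /** Move 'testRename' into 'oldgroup' and print where its regions ended up. */
      static void moveTestRename(Connection conn) throws IOException {
        TableName table = TableName.valueOf("testRename");
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Triggers the REOPEN/MOVE TransitRegionStateProcedure seen above (pid=120).
        rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
        RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
        System.out.println("testRename now belongs to group " + group.getName());
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // Each region should now be hosted by a server that is in 'oldgroup'.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }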
2023-07-18 02:15:11,215 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:11,215 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646511215"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646511215"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646511215"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646511215"}]},"ts":"1689646511215"} 2023-07-18 02:15:11,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-18 02:15:11,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,35063,1689646489808 in 171 msec 2023-07-18 02:15:11,220 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE in 496 msec 2023-07-18 02:15:11,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-18 02:15:11,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-18 02:15:11,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:11,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:11,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:11,729 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:11,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 02:15:11,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:11,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 02:15:11,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:11,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 02:15:11,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:11,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:11,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:11,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 02:15:11,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:11,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:11,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:11,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:11,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:11,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:11,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:11,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:11,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43645] to rsgroup normal 2023-07-18 02:15:11,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:11,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:11,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:11,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:11,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43645,1689646493716] are moved back to default 2023-07-18 02:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 02:15:11,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:11,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:11,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:11,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 02:15:11,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:11,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:11,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 02:15:11,766 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:11,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-18 02:15:11,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 02:15:11,768 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:11,768 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:11,769 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:11,769 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-18 02:15:11,769 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:11,771 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:11,773 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:11,773 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec empty. 2023-07-18 02:15:11,774 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:11,774 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 02:15:11,788 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:11,789 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => eb5efc21960221b704d272f83f5b2dec, NAME => 'unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing eb5efc21960221b704d272f83f5b2dec, disabling compactions & flushes 2023-07-18 02:15:11,800 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. after waiting 0 ms 2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:11,800 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:11,800 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:11,802 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:11,803 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646511803"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646511803"}]},"ts":"1689646511803"} 2023-07-18 02:15:11,805 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:11,805 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:11,805 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646511805"}]},"ts":"1689646511805"} 2023-07-18 02:15:11,807 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 02:15:11,810 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, ASSIGN}] 2023-07-18 02:15:11,812 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, ASSIGN 2023-07-18 02:15:11,813 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:11,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 02:15:11,965 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:11,965 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646511965"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646511965"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646511965"}]},"ts":"1689646511965"} 2023-07-18 02:15:11,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:12,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-18 02:15:12,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb5efc21960221b704d272f83f5b2dec, NAME => 'unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,123 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,125 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:12,125 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:12,125 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb5efc21960221b704d272f83f5b2dec columnFamilyName ut 2023-07-18 02:15:12,126 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(310): Store=eb5efc21960221b704d272f83f5b2dec/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:12,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:12,132 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb5efc21960221b704d272f83f5b2dec; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11214368320, jitterRate=0.044419437646865845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:12,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:12,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec., pid=125, masterSystemTime=1689646512118 2023-07-18 02:15:12,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:12,135 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:12,135 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646512135"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646512135"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646512135"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646512135"}]},"ts":"1689646512135"} 2023-07-18 02:15:12,137 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-18 02:15:12,138 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555 in 170 msec 2023-07-18 02:15:12,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 02:15:12,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, ASSIGN in 328 msec 2023-07-18 02:15:12,140 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:12,140 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646512140"}]},"ts":"1689646512140"} 2023-07-18 02:15:12,141 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 02:15:12,143 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:12,144 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 380 msec 2023-07-18 02:15:12,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-18 02:15:12,370 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-18 02:15:12,371 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 02:15:12,371 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:12,374 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-18 02:15:12,375 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:12,375 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
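The run of entries from "add rsgroup normal" (02:15:11,736) through "All regions for table unmovedTable assigned" is a common setup pattern: create a group, move a server into it, create a table, then move that table into the group (the MoveTables call that follows at 02:15:12,376). A rough client-side equivalent, assuming the branch-2.x RSGroupAdminClient and Admin APIs; the hostname and port are copied from the log, and the sketch is illustrative rather than the test's actual code:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class SetupNormalGroup {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // AddRSGroup + MoveServers, as in the log at 02:15:11,736 and 02:15:11,753.
          rsGroupAdmin.addRSGroup("normal");
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43645)),
              "normal");

          // CreateTableProcedure for 'unmovedTable' with family 'ut' (02:15:11,763).
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("unmovedTable"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut"))
              .build());

          // MoveTables into the new group (02:15:12,376); the RPC blocks until the
          // REOPEN/MOVE procedure completes, as the later ProcedureSyncWait entry shows.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
        }
      }
    }
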
2023-07-18 02:15:12,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 02:15:12,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 02:15:12,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:12,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:12,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:12,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:12,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 02:15:12,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region eb5efc21960221b704d272f83f5b2dec to RSGroup normal 2023-07-18 02:15:12,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE 2023-07-18 02:15:12,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 02:15:12,382 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE 2023-07-18 02:15:12,383 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:12,383 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646512383"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646512383"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646512383"}]},"ts":"1689646512383"} 2023-07-18 02:15:12,384 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:12,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb5efc21960221b704d272f83f5b2dec, disabling compactions & flushes 2023-07-18 02:15:12,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:12,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. after waiting 0 ms 2023-07-18 02:15:12,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:12,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:12,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eb5efc21960221b704d272f83f5b2dec move to jenkins-hbase4.apache.org,43645,1689646493716 record at close sequenceid=2 2023-07-18 02:15:12,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,545 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=CLOSED 2023-07-18 02:15:12,545 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646512545"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646512545"}]},"ts":"1689646512545"} 2023-07-18 02:15:12,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 02:15:12,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555 in 162 msec 2023-07-18 02:15:12,548 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:12,699 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:12,699 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646512699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646512699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646512699"}]},"ts":"1689646512699"} 2023-07-18 02:15:12,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:12,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb5efc21960221b704d272f83f5b2dec, NAME => 'unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,857 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,858 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:12,859 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:12,859 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
eb5efc21960221b704d272f83f5b2dec columnFamilyName ut 2023-07-18 02:15:12,859 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(310): Store=eb5efc21960221b704d272f83f5b2dec/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:12,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:12,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb5efc21960221b704d272f83f5b2dec; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9403274720, jitterRate=-0.12425179779529572}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:12,865 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:12,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec., pid=128, masterSystemTime=1689646512852 2023-07-18 02:15:12,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:12,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:12,867 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:12,867 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646512867"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646512867"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646512867"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646512867"}]},"ts":"1689646512867"} 2023-07-18 02:15:12,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 02:15:12,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,43645,1689646493716 in 167 msec 2023-07-18 02:15:12,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE in 488 msec 2023-07-18 02:15:13,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 02:15:13,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-18 02:15:13,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:13,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:13,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:13,388 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:13,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 02:15:13,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:13,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 02:15:13,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:13,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 02:15:13,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:13,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 02:15:13,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:13,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:13,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:13,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:13,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 02:15:13,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 02:15:13,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:13,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:13,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 02:15:13,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:13,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 02:15:13,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:13,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 02:15:13,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:13,409 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:13,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:13,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 02:15:13,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:13,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:13,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:13,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:13,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:13,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 02:15:13,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region eb5efc21960221b704d272f83f5b2dec to RSGroup default 2023-07-18 02:15:13,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE 2023-07-18 02:15:13,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 02:15:13,419 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE 2023-07-18 02:15:13,419 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:13,420 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646513419"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646513419"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646513419"}]},"ts":"1689646513419"} 2023-07-18 02:15:13,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:13,574 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb5efc21960221b704d272f83f5b2dec, disabling compactions & flushes 2023-07-18 02:15:13,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. after waiting 0 ms 2023-07-18 02:15:13,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:15:13,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:13,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eb5efc21960221b704d272f83f5b2dec move to jenkins-hbase4.apache.org,45077,1689646489555 record at close sequenceid=5 2023-07-18 02:15:13,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,581 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=CLOSED 2023-07-18 02:15:13,582 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646513581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646513581"}]},"ts":"1689646513581"} 2023-07-18 02:15:13,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-18 02:15:13,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,43645,1689646493716 in 162 msec 2023-07-18 02:15:13,585 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:13,735 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:13,735 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646513735"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646513735"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646513735"}]},"ts":"1689646513735"} 2023-07-18 02:15:13,737 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:13,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eb5efc21960221b704d272f83f5b2dec, NAME => 'unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:13,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:13,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,893 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,894 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,895 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:13,895 DEBUG [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/ut 2023-07-18 02:15:13,896 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eb5efc21960221b704d272f83f5b2dec columnFamilyName ut 2023-07-18 02:15:13,896 INFO [StoreOpener-eb5efc21960221b704d272f83f5b2dec-1] regionserver.HStore(310): Store=eb5efc21960221b704d272f83f5b2dec/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:13,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:13,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eb5efc21960221b704d272f83f5b2dec; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9690265600, jitterRate=-0.09752368927001953}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:13,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:13,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec., pid=131, masterSystemTime=1689646513889 2023-07-18 02:15:13,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:13,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:13,904 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=eb5efc21960221b704d272f83f5b2dec, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:13,905 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689646513904"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646513904"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646513904"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646513904"}]},"ts":"1689646513904"} 2023-07-18 02:15:13,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-18 02:15:13,907 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure eb5efc21960221b704d272f83f5b2dec, server=jenkins-hbase4.apache.org,45077,1689646489555 in 169 msec 2023-07-18 02:15:13,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=eb5efc21960221b704d272f83f5b2dec, REOPEN/MOVE in 489 msec 2023-07-18 02:15:13,912 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 02:15:14,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-18 02:15:14,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
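The rename at 02:15:13,392 rewrites the group znodes under /hbase/rsgroup, and the GetRSGroupInfo / GetRSGroupInfoOfTable calls that follow verify that newgroup exists and that testRename now reports it. A sketch of that interaction, assuming the client in this branch exposes renameRSGroup to match the RenameRSGroup RPC seen above (the exact client-side signature is an assumption, not confirmed by this log):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameGroupExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // RenameRSGroup RPC (02:15:13,392): oldgroup becomes newgroup.
          // Method name assumed; check the RSGroupAdmin interface of your branch.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

          // Verification mirrors the GetRSGroupInfo / GetRSGroupInfoOfTable calls in the log:
          // the renamed group exists, and a table moved into oldgroup now reports newgroup.
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println(renamed.getName() + " / " + ofTable.getName());
        }
      }
    }
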
2023-07-18 02:15:14,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:14,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43645] to rsgroup default 2023-07-18 02:15:14,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 02:15:14,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:14,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:14,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:14,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:14,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 02:15:14,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43645,1689646493716] are moved back to normal 2023-07-18 02:15:14,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 02:15:14,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:14,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 02:15:14,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:14,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:14,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:14,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 02:15:14,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:14,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:14,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
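The handler then drains the temporary group "normal": its single server jenkins-hbase4.apache.org:43645 is moved back to default ("Move servers done: normal => default") before the now-empty group is removed. A hedged sketch of the equivalent client calls follows; the host and port are simply the values from this particular run.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DrainAndRemoveGroup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Move the lone member of "normal" back to the default group,
          // mirroring the MoveServers request logged above.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43645)),
              RSGroupInfo.DEFAULT_GROUP);
          // Then drop the emptied group, mirroring the RemoveRSGroup request.
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }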
2023-07-18 02:15:14,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:14,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:14,434 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:14,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:14,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:14,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:14,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:14,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:14,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 02:15:14,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:14,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:14,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:14,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 02:15:14,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(345): Moving region bbf71cfacd6e4740d14aa9af8f240c8d to RSGroup default 2023-07-18 02:15:14,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE 2023-07-18 02:15:14,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 02:15:14,447 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE 2023-07-18 02:15:14,452 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:14,452 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646514452"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646514452"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646514452"}]},"ts":"1689646514452"} 2023-07-18 02:15:14,453 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,35063,1689646489808}] 2023-07-18 02:15:14,606 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bbf71cfacd6e4740d14aa9af8f240c8d, disabling compactions & flushes 2023-07-18 02:15:14,607 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:14,607 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:14,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. after waiting 0 ms 2023-07-18 02:15:14,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:14,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 02:15:14,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
2023-07-18 02:15:14,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:14,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bbf71cfacd6e4740d14aa9af8f240c8d move to jenkins-hbase4.apache.org,43645,1689646493716 record at close sequenceid=5 2023-07-18 02:15:14,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,615 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=CLOSED 2023-07-18 02:15:14,615 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646514615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646514615"}]},"ts":"1689646514615"} 2023-07-18 02:15:14,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-18 02:15:14,618 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,35063,1689646489808 in 163 msec 2023-07-18 02:15:14,618 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:14,768 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:14,769 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:14,769 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646514769"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646514769"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646514769"}]},"ts":"1689646514769"} 2023-07-18 02:15:14,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:14,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
2023-07-18 02:15:14,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bbf71cfacd6e4740d14aa9af8f240c8d, NAME => 'testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:14,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:14,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,928 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,930 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:14,930 DEBUG [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/tr 2023-07-18 02:15:14,930 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bbf71cfacd6e4740d14aa9af8f240c8d columnFamilyName tr 2023-07-18 02:15:14,931 INFO [StoreOpener-bbf71cfacd6e4740d14aa9af8f240c8d-1] regionserver.HStore(310): Store=bbf71cfacd6e4740d14aa9af8f240c8d/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:14,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:14,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bbf71cfacd6e4740d14aa9af8f240c8d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12075993920, jitterRate=0.12466457486152649}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:14,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:14,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d., pid=134, masterSystemTime=1689646514922 2023-07-18 02:15:14,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:14,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:14,940 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=bbf71cfacd6e4740d14aa9af8f240c8d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:14,941 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689646514940"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646514940"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646514940"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646514940"}]},"ts":"1689646514940"} 2023-07-18 02:15:14,943 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-18 02:15:14,943 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure bbf71cfacd6e4740d14aa9af8f240c8d, server=jenkins-hbase4.apache.org,43645,1689646493716 in 172 msec 2023-07-18 02:15:14,944 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=bbf71cfacd6e4740d14aa9af8f240c8d, REOPEN/MOVE in 497 msec 2023-07-18 02:15:15,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-18 02:15:15,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
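At this point the REOPEN/MOVE of bbf71cfacd6e4740d14aa9af8f240c8d has landed the region on jenkins-hbase4.apache.org,43645 and the handler reports that all regions of testRename reached the default group. A hedged way to confirm the result from the client, using the same RSGroupAdminService.GetRSGroupInfoOfTable endpoint exercised later in this log (illustrative code, not part of the test):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class CheckTableGroup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // After the move above, testRename should resolve to the default group.
          RSGroupInfo info =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
          System.out.println("testRename is in group: "
              + (info == null ? "<none>" : info.getName()));
        }
      }
    }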
2023-07-18 02:15:15,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:15,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup default 2023-07-18 02:15:15,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 02:15:15,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 02:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to newgroup 2023-07-18 02:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 02:15:15,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:15,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 02:15:15,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:15,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:15,467 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:15,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:15,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:15,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:15,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:15,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 761 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647715481, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:15,481 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:15,483 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:15,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,484 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:15,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:15,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,505 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=518 (was 523), OpenFileDescriptor=801 (was 813), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 447), ProcessCount=170 (was 170), AvailableMemoryMB=4666 (was 4678) 2023-07-18 02:15:15,505 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-18 02:15:15,526 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=518, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=420, ProcessCount=170, AvailableMemoryMB=4665 2023-07-18 02:15:15,526 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-18 02:15:15,526 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 02:15:15,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:15,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
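The WARN and stack trace above come from TestRSGroupsBase's per-method cleanup (tearDownAfterMethod): it re-creates a "master" rsgroup and attempts to move jenkins-hbase4.apache.org:40909 into it, which the master rejects with a ConstraintException ("Server ... is either offline or it does not exist"), presumably because 40909 is the master's RPC endpoint rather than a live region server; the test logs "Got this on setup, FYI" and carries on. A hedged sketch of that tolerate-the-failure pattern, with the address copied from this run purely for illustration:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TolerateMasterMove {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's port is not an online region server, so the server
            // side answers with the ConstraintException seen in the log.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 40909)),
                "master");
          } catch (ConstraintException expected) {
            // Non-fatal for the cleanup path; just note it, as the test does.
            System.out.println("Got this on setup, FYI: " + expected.getMessage());
          }
        }
      }
    }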
2023-07-18 02:15:15,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:15,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:15,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:15,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:15,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:15,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:15,540 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:15,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:15,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:15,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:15,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:15,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 789 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647715551, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:15,551 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:15,553 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:15,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,554 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 02:15:15,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:15,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 02:15:15,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 02:15:15,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 02:15:15,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 02:15:15,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 801 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:39122 deadline: 1689647715562, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 02:15:15,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 02:15:15,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 804 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:39122 deadline: 1689647715565, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 02:15:15,567 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 02:15:15,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 02:15:15,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 02:15:15,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:39122 deadline: 1689647715571, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 02:15:15,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:15,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
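A minimal client-side sketch of the testBogusArgs calls traced above, assuming an RSGroupAdmin handle (for example the RSGroupAdminClient visible in the stack traces) is available as rsGroupAdmin; the comments about null returns are inferred from the absence of exceptions in the log, not from the test source.

  import java.io.IOException;
  import java.util.Collections;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

  public class BogusArgsSketch {
    static void probe(RSGroupAdmin rsGroupAdmin) throws IOException {
      // Lookups for unknown names: only retrieval INFO lines are logged, no exceptions.
      rsGroupAdmin.getRSGroupInfo("bogus");                                  // presumably null
      rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));  // presumably null
      rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123));      // presumably null
      // Mutations against the unknown group are rejected with a ConstraintException,
      // which reaches the client wrapped in an IOException.
      try {
        rsGroupAdmin.removeRSGroup("bogus");
      } catch (IOException e) { /* "RSGroup bogus does not exist" */ }
      try {
        rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
      } catch (IOException e) { /* "RSGroup does not exist: bogus" */ }
      try {
        rsGroupAdmin.balanceRSGroup("bogus");
      } catch (IOException e) { /* "RSGroup does not exist: bogus" */ }
    }
  }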
2023-07-18 02:15:15,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:15,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:15,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:15,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:15,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:15,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:15,584 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:15,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:15,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:15,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:15,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:15,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 832 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647715593, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:15,596 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:15,598 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:15,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,599 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:15,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:15,599 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,616 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=522 (was 518) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7c96b44e-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 420), ProcessCount=170 (was 170), AvailableMemoryMB=4664 (was 4665) 2023-07-18 02:15:15,616 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-18 02:15:15,632 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=522, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=420, ProcessCount=170, AvailableMemoryMB=4664 2023-07-18 02:15:15,632 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-18 02:15:15,632 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 02:15:15,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:15,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
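The repeated "Waiting up to [60,000] milli-secs" / "Waiting for cleanup to finish ..." pairs above come from Waiter-based polling between test methods, and the "Got this on setup, FYI" ConstraintException is expected: the base test tries to move the active master's address (jenkins-hbase4.apache.org:40909) into the "master" group, and the master is not a live region server. A sketch of the polling pattern, assuming the usual HBaseTestingUtility and an RSGroupAdmin handle; the predicate body is illustrative rather than copied from TestRSGroupsBase.

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.Waiter;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

  public class CleanupWaitSketch {
    static void waitForCleanup(HBaseTestingUtility testUtil, RSGroupAdmin rsGroupAdmin)
        throws Exception {
      // Poll for up to 60 s, describing the current group layout while waiting; this is
      // what produces the Waiter(180) and "Waiting for cleanup to finish [...]" records.
      testUtil.waitFor(60000, new Waiter.ExplainingPredicate<IOException>() {
        @Override
        public boolean evaluate() throws IOException {
          // Illustrative condition: only the "default" and "master" groups remain.
          return rsGroupAdmin.listRSGroups().size() == 2;
        }
        @Override
        public String explainFailure() throws IOException {
          return "Waiting for cleanup to finish " + rsGroupAdmin.listRSGroups();
        }
      });
    }
  }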
2023-07-18 02:15:15,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:15,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:15,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:15,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:15,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:15,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:15,645 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:15,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:15,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:15,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:15,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:15,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:15,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 860 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647715657, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:15,658 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:15,659 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:15,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,660 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:15,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:15,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:15,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
02:15:15,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:15,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:15,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:15,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 02:15:15,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to default 2023-07-18 02:15:15,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:15,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:15,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:15,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,679 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:15,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:15,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:15,683 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:15,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-18 02:15:15,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 02:15:15,685 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:15,685 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:15,686 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:15,686 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:15,689 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:15,693 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:15,693 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:15,693 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:15,693 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:15,693 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e empty. 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e empty. 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee empty. 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb empty. 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 empty. 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:15,694 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:15,694 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 02:15:15,712 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:15,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => aef973101e2f6499975300e7250ac2ee, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:15,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 701a99e0c263b7c9b528d737e988902e, NAME => 'Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:15,713 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 779ca4a243943953951087355d31d5bb, NAME => 'Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:15,754 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 701a99e0c263b7c9b528d737e988902e, disabling compactions & flushes 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing aef973101e2f6499975300e7250ac2ee, disabling compactions & flushes 2023-07-18 02:15:15,755 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:15,755 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 
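Stepping back to the group setup logged just before the table creation (AddRSGroup followed by MoveServers into Group_testDisabledTableMove_93749856), a minimal sketch of those two admin calls; rsGroupAdmin is an assumed RSGroupAdmin handle and the addresses are the ones reported by this run.

  import java.io.IOException;
  import java.util.HashSet;
  import java.util.Set;
  import org.apache.hadoop.hbase.net.Address;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class GroupSetupSketch {
    static void setUpGroup(RSGroupAdmin rsGroupAdmin) throws IOException {
      String group = "Group_testDisabledTableMove_93749856";
      rsGroupAdmin.addRSGroup(group);
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35063));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39557));
      // Regions hosted on these servers are drained to the remaining members of the source
      // group first ("Moving 0 region(s) to group default" in this run), then membership
      // and the /hbase/rsgroup znodes are updated.
      rsGroupAdmin.moveServers(servers, group);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);  // now reports both servers
    }
  }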
2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. after waiting 0 ms 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. after waiting 0 ms 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:15,755 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for aef973101e2f6499975300e7250ac2ee: 2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:15,755 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 
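The CreateTableProcedure records above correspond to creating Group_testDisabledTableMove with a single family 'f' and five regions whose boundaries are aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B and zzzzz. A hedged sketch of the client call using the standard 2.x Admin API; whether the test uses this even-split overload or passes explicit split keys is an assumption.

  import java.io.IOException;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.regionserver.BloomType;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CreateTableSketch {
    static void createTestTable(Admin admin) throws IOException {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
              .setBloomFilterType(BloomType.NONE)  // the logged schema shows BLOOMFILTER => 'NONE'
              .build())
          .build();
      // Five regions evenly split between "aaaaa" and "zzzzz", consistent with the
      // region boundaries in the records above.
      admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
    }
  }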
2023-07-18 02:15:15,755 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 701a99e0c263b7c9b528d737e988902e: 2023-07-18 02:15:15,756 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 67a6a461ca0a8fed3574ea2c030cb75e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:15,756 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2f28a773589111cb78b99d6b81a6c044, NAME => 'Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp 2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 2f28a773589111cb78b99d6b81a6c044, disabling compactions & flushes 2023-07-18 02:15:15,771 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. after waiting 0 ms 2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:15,771 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 
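The "Checking to see if procedure is done pid=135" records interleaved here are the client polling the master for completion of the CreateTableProcedure; the synchronous Admin.createTable call sketched above does this internally. For reference, the asynchronous form makes that handoff explicit (a sketch under the same assumptions, not the test's code).

  import java.util.concurrent.Future;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.TableDescriptor;

  public class CreateTableAsyncSketch {
    static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splits)
        throws Exception {
      // Submit the create; the master stores the procedure (pid=135 in this run) and the
      // returned Future polls until it is done, which is what the repeated
      // "Checking to see if procedure is done" DEBUG records reflect.
      Future<Void> pending = admin.createTableAsync(desc, splits);
      pending.get();
    }
  }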
2023-07-18 02:15:15,771 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 2f28a773589111cb78b99d6b81a6c044: 2023-07-18 02:15:15,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 02:15:15,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 779ca4a243943953951087355d31d5bb, disabling compactions & flushes 2023-07-18 02:15:16,140 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. after waiting 0 ms 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,140 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,140 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 779ca4a243943953951087355d31d5bb: 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 67a6a461ca0a8fed3574ea2c030cb75e, disabling compactions & flushes 2023-07-18 02:15:16,172 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 
after waiting 0 ms 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,172 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,172 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 67a6a461ca0a8fed3574ea2c030cb75e: 2023-07-18 02:15:16,175 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:16,176 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516176"}]},"ts":"1689646516176"} 2023-07-18 02:15:16,176 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516176"}]},"ts":"1689646516176"} 2023-07-18 02:15:16,176 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516176"}]},"ts":"1689646516176"} 2023-07-18 02:15:16,176 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516176"}]},"ts":"1689646516176"} 2023-07-18 02:15:16,176 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516176"}]},"ts":"1689646516176"} 2023-07-18 02:15:16,178 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
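
[editor's note] For reference, the five regions and the table descriptor logged above correspond to a plain pre-split create through the client Admin API. A minimal sketch, assuming stock hbase-client 2.x: the table name, family 'f', max versions and the four split keys are taken from the log, while the class name and connection setup are illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTable {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        // Single family 'f' with the settings shown in the logged descriptor
        // (VERSIONS => '1'; the remaining attributes are the defaults).
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)
                .build())
            .build();
        // Four split keys give the five regions seen above; toBytesBinary
        // interprets the \xNN escapes from the log as raw bytes.
        byte[][] splits = new byte[][] {
            Bytes.toBytes("aaaaa"),
            Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
            Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
            Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          // Blocks until the master's CreateTableProcedure completes.
          admin.createTable(desc, splits);
        }
      }
    }
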
2023-07-18 02:15:16,179 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:16,179 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646516179"}]},"ts":"1689646516179"} 2023-07-18 02:15:16,180 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 02:15:16,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:16,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:16,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:16,183 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:16,183 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, ASSIGN}] 2023-07-18 02:15:16,187 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, ASSIGN 2023-07-18 02:15:16,187 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, ASSIGN 2023-07-18 02:15:16,187 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, ASSIGN 2023-07-18 02:15:16,188 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, ASSIGN 2023-07-18 02:15:16,188 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:16,188 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, ASSIGN 2023-07-18 02:15:16,188 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:16,189 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:16,189 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43645,1689646493716; forceNewPlan=false, retain=false 2023-07-18 02:15:16,189 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45077,1689646489555; forceNewPlan=false, retain=false 2023-07-18 02:15:16,259 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-18 02:15:16,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 02:15:16,339 INFO [jenkins-hbase4:40909] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
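
[editor's note] The periodic "Checking to see if procedure is done pid=135" entries are the client polling the master until the create procedure finishes. A rough sketch of that pattern using the asynchronous Admin variant, reusing desc and splits from the previous sketch; the timeout value is illustrative.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class AsyncCreateSketch {
      // admin is an open Admin handle; desc and splits as in the previous sketch.
      static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splits)
          throws Exception {
        // Non-blocking submit; the returned Future completes when the master-side
        // CreateTableProcedure succeeds, which the client discovers by the
        // repeated "is procedure done" RPCs seen in the log.
        Future<Void> f = admin.createTableAsync(desc, splits);
        f.get(5, TimeUnit.MINUTES);  // illustrative timeout
      }
    }
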
2023-07-18 02:15:16,342 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=2f28a773589111cb78b99d6b81a6c044, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=aef973101e2f6499975300e7250ac2ee, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,342 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=701a99e0c263b7c9b528d737e988902e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,343 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516342"}]},"ts":"1689646516342"} 2023-07-18 02:15:16,342 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=67a6a461ca0a8fed3574ea2c030cb75e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,342 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=779ca4a243943953951087355d31d5bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,343 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516342"}]},"ts":"1689646516342"} 2023-07-18 02:15:16,343 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516342"}]},"ts":"1689646516342"} 2023-07-18 02:15:16,343 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516342"}]},"ts":"1689646516342"} 2023-07-18 02:15:16,343 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516342"}]},"ts":"1689646516342"} 2023-07-18 02:15:16,344 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE; OpenRegionProcedure 2f28a773589111cb78b99d6b81a6c044, 
server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,345 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=139, state=RUNNABLE; OpenRegionProcedure 67a6a461ca0a8fed3574ea2c030cb75e, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,346 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=137, state=RUNNABLE; OpenRegionProcedure 779ca4a243943953951087355d31d5bb, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:16,346 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=136, state=RUNNABLE; OpenRegionProcedure 701a99e0c263b7c9b528d737e988902e, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:16,350 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=138, state=RUNNABLE; OpenRegionProcedure aef973101e2f6499975300e7250ac2ee, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 67a6a461ca0a8fed3574ea2c030cb75e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 02:15:16,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,503 INFO [StoreOpener-67a6a461ca0a8fed3574ea2c030cb75e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,504 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 
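
[editor's note] Once the OpenRegionProcedures queued above finish, the placement they record can be read back through the normal client API. A small sketch, assuming an otherwise default client configuration; the class and variable names are illustrative.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionPlacement {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // Each location pairs a region with the server hosting it, i.e. the
          // ...,43645,... vs ...,45077,... split visible in the log above.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName()
                + " -> " + loc.getServerName());
          }
        }
      }
    }
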
2023-07-18 02:15:16,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 779ca4a243943953951087355d31d5bb, NAME => 'Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 02:15:16,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,504 DEBUG [StoreOpener-67a6a461ca0a8fed3574ea2c030cb75e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/f 2023-07-18 02:15:16,504 DEBUG [StoreOpener-67a6a461ca0a8fed3574ea2c030cb75e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/f 2023-07-18 02:15:16,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,505 INFO [StoreOpener-67a6a461ca0a8fed3574ea2c030cb75e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 67a6a461ca0a8fed3574ea2c030cb75e columnFamilyName f 2023-07-18 02:15:16,505 INFO [StoreOpener-67a6a461ca0a8fed3574ea2c030cb75e-1] regionserver.HStore(310): Store=67a6a461ca0a8fed3574ea2c030cb75e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:16,505 INFO [StoreOpener-779ca4a243943953951087355d31d5bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,506 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,507 DEBUG [StoreOpener-779ca4a243943953951087355d31d5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/f 2023-07-18 02:15:16,507 DEBUG [StoreOpener-779ca4a243943953951087355d31d5bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/f 2023-07-18 02:15:16,507 INFO [StoreOpener-779ca4a243943953951087355d31d5bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 779ca4a243943953951087355d31d5bb columnFamilyName f 2023-07-18 02:15:16,508 INFO [StoreOpener-779ca4a243943953951087355d31d5bb-1] regionserver.HStore(310): Store=779ca4a243943953951087355d31d5bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:16,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:16,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 67a6a461ca0a8fed3574ea2c030cb75e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10261071040, jitterRate=-0.04436329007148743}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:16,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 67a6a461ca0a8fed3574ea2c030cb75e: 2023-07-18 02:15:16,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e., pid=142, masterSystemTime=1689646516497 2023-07-18 02:15:16,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:16,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:16,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 779ca4a243943953951087355d31d5bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9452202080, jitterRate=-0.11969508230686188}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:16,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2f28a773589111cb78b99d6b81a6c044, NAME => 'Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 02:15:16,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 779ca4a243943953951087355d31d5bb: 2023-07-18 02:15:16,515 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=67a6a461ca0a8fed3574ea2c030cb75e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,515 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516515"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646516515"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646516515"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646516515"}]},"ts":"1689646516515"} 2023-07-18 02:15:16,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb., pid=143, masterSystemTime=1689646516500 2023-07-18 02:15:16,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 
2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 701a99e0c263b7c9b528d737e988902e, NAME => 'Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 02:15:16,517 INFO [StoreOpener-2f28a773589111cb78b99d6b81a6c044-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,518 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=779ca4a243943953951087355d31d5bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,518 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516518"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646516518"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646516518"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646516518"}]},"ts":"1689646516518"} 2023-07-18 02:15:16,519 DEBUG [StoreOpener-2f28a773589111cb78b99d6b81a6c044-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/f 2023-07-18 02:15:16,519 DEBUG [StoreOpener-2f28a773589111cb78b99d6b81a6c044-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/f 2023-07-18 02:15:16,519 INFO [StoreOpener-701a99e0c263b7c9b528d737e988902e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,519 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=139 2023-07-18 02:15:16,519 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=139, state=SUCCESS; OpenRegionProcedure 67a6a461ca0a8fed3574ea2c030cb75e, server=jenkins-hbase4.apache.org,43645,1689646493716 in 172 msec 2023-07-18 02:15:16,519 INFO [StoreOpener-2f28a773589111cb78b99d6b81a6c044-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2f28a773589111cb78b99d6b81a6c044 columnFamilyName f 2023-07-18 02:15:16,520 INFO [StoreOpener-2f28a773589111cb78b99d6b81a6c044-1] regionserver.HStore(310): Store=2f28a773589111cb78b99d6b81a6c044/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:16,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,521 DEBUG [StoreOpener-701a99e0c263b7c9b528d737e988902e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/f 2023-07-18 02:15:16,521 DEBUG [StoreOpener-701a99e0c263b7c9b528d737e988902e-1] util.CommonFSUtils(522): 
Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/f 2023-07-18 02:15:16,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,521 INFO [StoreOpener-701a99e0c263b7c9b528d737e988902e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 701a99e0c263b7c9b528d737e988902e columnFamilyName f 2023-07-18 02:15:16,522 INFO [StoreOpener-701a99e0c263b7c9b528d737e988902e-1] regionserver.HStore(310): Store=701a99e0c263b7c9b528d737e988902e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:16,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, ASSIGN in 336 msec 2023-07-18 02:15:16,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=137 2023-07-18 02:15:16,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=137, state=SUCCESS; OpenRegionProcedure 779ca4a243943953951087355d31d5bb, server=jenkins-hbase4.apache.org,45077,1689646489555 in 173 msec 2023-07-18 02:15:16,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,525 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, ASSIGN in 341 msec 2023-07-18 02:15:16,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:16,527 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2f28a773589111cb78b99d6b81a6c044; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9871699360, jitterRate=-0.08062635362148285}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:16,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2f28a773589111cb78b99d6b81a6c044: 2023-07-18 02:15:16,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:16,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044., pid=141, masterSystemTime=1689646516497 2023-07-18 02:15:16,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 701a99e0c263b7c9b528d737e988902e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11040349120, jitterRate=0.028212636709213257}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:16,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 701a99e0c263b7c9b528d737e988902e: 2023-07-18 02:15:16,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e., pid=144, masterSystemTime=1689646516500 2023-07-18 02:15:16,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:16,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:16,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 
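
[editor's note] The desiredMaxFileSize printed with each split policy tracks the 10 GiB default region max file size scaled by the logged jitterRate: 10737418240 * (1 - 0.04436329...) is about 10261071040, the value shown for 67a6a461... above. A quick arithmetic check of that reading; the 10 GiB base and the multiplier interpretation are inferences from the logged numbers, and the results agree only to within rounding.

    public class SplitSizeJitterCheck {
      public static void main(String[] args) {
        long base = 10L * 1024 * 1024 * 1024;      // 10737418240, stock hbase.hregion.max.filesize
        double[] jitter = {
            -0.04436329007148743,                  // 67a6a461... -> ~10261071040
            -0.11969508230686188,                  // 779ca4a2... -> ~9452202080
            -0.08062635362148285                   // 2f28a773... -> ~9871699360
        };
        for (double j : jitter) {
          // Should print values very close to the desiredMaxFileSize lines above.
          System.out.println((long) (base * (1.0 + j)));
        }
      }
    }
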
2023-07-18 02:15:16,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => aef973101e2f6499975300e7250ac2ee, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 02:15:16,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,531 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=2f28a773589111cb78b99d6b81a6c044, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:16,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,531 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516531"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646516531"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646516531"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646516531"}]},"ts":"1689646516531"} 2023-07-18 02:15:16,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:16,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 
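
[editor's note] The RegionStateStore Puts in this chunk are ordinary writes to hbase:meta under the info family, using the qualifiers shown (regioninfo, sn, server, serverstartcode, seqnumDuringOpen, state). They can be inspected with a plain client scan; a rough sketch, assuming a row-prefix match on the table name is enough to isolate the rows and that string-decoding server/state is acceptable for eyeballing.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMetaForTable {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection();
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan()
                 .setRowPrefixFilter(Bytes.toBytes("Group_testDisabledTableMove,"))
                 .addFamily(info))) {
          for (Result r : scanner) {
            // 'state' and 'server' are the qualifiers written by the Puts above.
            System.out.println(Bytes.toString(r.getRow())
                + " state=" + Bytes.toString(r.getValue(info, Bytes.toBytes("state")))
                + " server=" + Bytes.toString(r.getValue(info, Bytes.toBytes("server"))));
          }
        }
      }
    }
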
2023-07-18 02:15:16,532 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=701a99e0c263b7c9b528d737e988902e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,533 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516532"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646516532"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646516532"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646516532"}]},"ts":"1689646516532"} 2023-07-18 02:15:16,533 INFO [StoreOpener-aef973101e2f6499975300e7250ac2ee-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,535 DEBUG [StoreOpener-aef973101e2f6499975300e7250ac2ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/f 2023-07-18 02:15:16,535 DEBUG [StoreOpener-aef973101e2f6499975300e7250ac2ee-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/f 2023-07-18 02:15:16,535 INFO [StoreOpener-aef973101e2f6499975300e7250ac2ee-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region aef973101e2f6499975300e7250ac2ee columnFamilyName f 2023-07-18 02:15:16,536 INFO [StoreOpener-aef973101e2f6499975300e7250ac2ee-1] regionserver.HStore(310): Store=aef973101e2f6499975300e7250ac2ee/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:16,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-18 02:15:16,536 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; OpenRegionProcedure 2f28a773589111cb78b99d6b81a6c044, server=jenkins-hbase4.apache.org,43645,1689646493716 in 189 msec 2023-07-18 02:15:16,536 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=136 2023-07-18 02:15:16,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=136, state=SUCCESS; OpenRegionProcedure 701a99e0c263b7c9b528d737e988902e, server=jenkins-hbase4.apache.org,45077,1689646489555 in 188 msec 2023-07-18 
02:15:16,537 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,538 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,538 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, ASSIGN in 354 msec 2023-07-18 02:15:16,538 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, ASSIGN in 353 msec 2023-07-18 02:15:16,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:16,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened aef973101e2f6499975300e7250ac2ee; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11073421600, jitterRate=0.03129275143146515}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:16,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for aef973101e2f6499975300e7250ac2ee: 2023-07-18 02:15:16,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee., pid=145, masterSystemTime=1689646516497 2023-07-18 02:15:16,546 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:16,546 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 
2023-07-18 02:15:16,546 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=aef973101e2f6499975300e7250ac2ee, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,546 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516546"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646516546"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646516546"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646516546"}]},"ts":"1689646516546"} 2023-07-18 02:15:16,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=138 2023-07-18 02:15:16,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=138, state=SUCCESS; OpenRegionProcedure aef973101e2f6499975300e7250ac2ee, server=jenkins-hbase4.apache.org,43645,1689646493716 in 199 msec 2023-07-18 02:15:16,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=135 2023-07-18 02:15:16,550 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, ASSIGN in 366 msec 2023-07-18 02:15:16,551 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:16,551 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646516551"}]},"ts":"1689646516551"} 2023-07-18 02:15:16,552 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 02:15:16,554 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:16,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 873 msec 2023-07-18 02:15:16,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-18 02:15:16,788 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-18 02:15:16,788 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-18 02:15:16,788 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:16,792 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
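
[editor's note] The "Waiting until all regions of table ... get assigned" step is the test utility blocking until hbase:meta and the AssignmentManager both report every region of the new table as open. A compact sketch of that call from inside a test, assuming the same HBaseTestingUtility instance that started the mini cluster is passed in.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      // util must be the HBaseTestingUtility that started the mini cluster.
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        // Blocks until every region of the table is assigned and open,
        // with the 60000 ms default timeout seen in the log.
        util.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testDisabledTableMove"));
      }
    }
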
2023-07-18 02:15:16,792 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:16,792 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 02:15:16,793 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:16,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 02:15:16,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:16,799 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 02:15:16,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 02:15:16,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:16,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 02:15:16,803 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646516803"}]},"ts":"1689646516803"} 2023-07-18 02:15:16,804 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 02:15:16,807 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 02:15:16,808 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, UNASSIGN}] 2023-07-18 02:15:16,808 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, UNASSIGN 2023-07-18 02:15:16,808 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, UNASSIGN 2023-07-18 02:15:16,809 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, UNASSIGN 2023-07-18 02:15:16,809 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, UNASSIGN 2023-07-18 02:15:16,809 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, UNASSIGN 2023-07-18 02:15:16,809 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=701a99e0c263b7c9b528d737e988902e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,809 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=779ca4a243943953951087355d31d5bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:16,810 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516809"}]},"ts":"1689646516809"} 2023-07-18 02:15:16,810 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516809"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516809"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516809"}]},"ts":"1689646516809"} 2023-07-18 02:15:16,810 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=67a6a461ca0a8fed3574ea2c030cb75e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,810 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=aef973101e2f6499975300e7250ac2ee, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,810 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=2f28a773589111cb78b99d6b81a6c044, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:16,810 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516810"}]},"ts":"1689646516810"} 2023-07-18 02:15:16,810 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516810"}]},"ts":"1689646516810"} 2023-07-18 02:15:16,810 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646516810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646516810"}]},"ts":"1689646516810"} 2023-07-18 02:15:16,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 701a99e0c263b7c9b528d737e988902e, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:16,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 779ca4a243943953951087355d31d5bb, server=jenkins-hbase4.apache.org,45077,1689646489555}] 2023-07-18 02:15:16,812 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=149, state=RUNNABLE; CloseRegionProcedure aef973101e2f6499975300e7250ac2ee, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=151, state=RUNNABLE; CloseRegionProcedure 2f28a773589111cb78b99d6b81a6c044, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,813 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=150, state=RUNNABLE; CloseRegionProcedure 67a6a461ca0a8fed3574ea2c030cb75e, server=jenkins-hbase4.apache.org,43645,1689646493716}] 2023-07-18 02:15:16,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 02:15:16,963 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 779ca4a243943953951087355d31d5bb, disabling compactions & flushes 2023-07-18 02:15:16,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. after waiting 0 ms 2023-07-18 02:15:16,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 
2023-07-18 02:15:16,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 67a6a461ca0a8fed3574ea2c030cb75e, disabling compactions & flushes 2023-07-18 02:15:16,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. after waiting 0 ms 2023-07-18 02:15:16,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:16,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:16,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb. 2023-07-18 02:15:16,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 779ca4a243943953951087355d31d5bb: 2023-07-18 02:15:16,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e. 2023-07-18 02:15:16,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 67a6a461ca0a8fed3574ea2c030cb75e: 2023-07-18 02:15:16,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 779ca4a243943953951087355d31d5bb 2023-07-18 02:15:16,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 701a99e0c263b7c9b528d737e988902e, disabling compactions & flushes 2023-07-18 02:15:16,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 
2023-07-18 02:15:16,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:16,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. after waiting 0 ms 2023-07-18 02:15:16,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 2023-07-18 02:15:16,973 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=779ca4a243943953951087355d31d5bb, regionState=CLOSED 2023-07-18 02:15:16,973 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516973"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516973"}]},"ts":"1689646516973"} 2023-07-18 02:15:16,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:16,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing aef973101e2f6499975300e7250ac2ee, disabling compactions & flushes 2023-07-18 02:15:16,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:16,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:16,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. after waiting 0 ms 2023-07-18 02:15:16,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 
2023-07-18 02:15:16,974 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=67a6a461ca0a8fed3574ea2c030cb75e, regionState=CLOSED 2023-07-18 02:15:16,975 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516974"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516974"}]},"ts":"1689646516974"} 2023-07-18 02:15:16,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-18 02:15:16,978 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 779ca4a243943953951087355d31d5bb, server=jenkins-hbase4.apache.org,45077,1689646489555 in 165 msec 2023-07-18 02:15:16,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=150 2023-07-18 02:15:16,979 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=150, state=SUCCESS; CloseRegionProcedure 67a6a461ca0a8fed3574ea2c030cb75e, server=jenkins-hbase4.apache.org,43645,1689646493716 in 163 msec 2023-07-18 02:15:16,979 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=779ca4a243943953951087355d31d5bb, UNASSIGN in 171 msec 2023-07-18 02:15:16,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=67a6a461ca0a8fed3574ea2c030cb75e, UNASSIGN in 172 msec 2023-07-18 02:15:16,982 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:16,983 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e. 
2023-07-18 02:15:16,983 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 701a99e0c263b7c9b528d737e988902e: 2023-07-18 02:15:16,984 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:16,984 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=701a99e0c263b7c9b528d737e988902e, regionState=CLOSED 2023-07-18 02:15:16,984 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:16,984 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516984"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516984"}]},"ts":"1689646516984"} 2023-07-18 02:15:16,985 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee. 2023-07-18 02:15:16,985 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for aef973101e2f6499975300e7250ac2ee: 2023-07-18 02:15:16,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:16,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2f28a773589111cb78b99d6b81a6c044, disabling compactions & flushes 2023-07-18 02:15:16,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:16,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 2023-07-18 02:15:16,987 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=aef973101e2f6499975300e7250ac2ee, regionState=CLOSED 2023-07-18 02:15:16,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. after waiting 0 ms 2023-07-18 02:15:16,987 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 
2023-07-18 02:15:16,988 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689646516987"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516987"}]},"ts":"1689646516987"} 2023-07-18 02:15:16,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-18 02:15:16,988 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 701a99e0c263b7c9b528d737e988902e, server=jenkins-hbase4.apache.org,45077,1689646489555 in 174 msec 2023-07-18 02:15:16,989 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=701a99e0c263b7c9b528d737e988902e, UNASSIGN in 181 msec 2023-07-18 02:15:16,990 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=149 2023-07-18 02:15:16,990 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=149, state=SUCCESS; CloseRegionProcedure aef973101e2f6499975300e7250ac2ee, server=jenkins-hbase4.apache.org,43645,1689646493716 in 177 msec 2023-07-18 02:15:16,991 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:16,991 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=aef973101e2f6499975300e7250ac2ee, UNASSIGN in 183 msec 2023-07-18 02:15:16,992 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044. 
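Each UNASSIGN above is persisted as a Put against the region's row in hbase:meta, updating the info:regioninfo, info:sn and info:state columns that the PEWorker threads log. Those columns can be read back with an ordinary client scan; a small sketch, assuming an open Connection (the row prefix and column names are taken from the log, the class and method names are illustrative):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaStateScanSketch {
      // Print the assignment state recorded in hbase:meta for every region of the
      // table, i.e. the info:state / info:sn values being updated above.
      static void dumpRegionStates(Connection conn) throws Exception {
        byte[] info = Bytes.toBytes("info");
        byte[] state = Bytes.toBytes("state");
        byte[] sn = Bytes.toBytes("sn");
        Scan scan = new Scan()
            .setRowPrefixFilter(Bytes.toBytes("Group_testDisabledTableMove,"))
            .addFamily(info);
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner rs = meta.getScanner(scan)) {
          for (Result r : rs) {
            System.out.println(Bytes.toStringBinary(r.getRow())
                + " state=" + Bytes.toString(r.getValue(info, state))
                + " sn=" + Bytes.toString(r.getValue(info, sn)));
          }
        }
      }
    }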
2023-07-18 02:15:16,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2f28a773589111cb78b99d6b81a6c044: 2023-07-18 02:15:16,993 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:16,993 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=2f28a773589111cb78b99d6b81a6c044, regionState=CLOSED 2023-07-18 02:15:16,993 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689646516993"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646516993"}]},"ts":"1689646516993"} 2023-07-18 02:15:16,996 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=151 2023-07-18 02:15:16,996 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=151, state=SUCCESS; CloseRegionProcedure 2f28a773589111cb78b99d6b81a6c044, server=jenkins-hbase4.apache.org,43645,1689646493716 in 182 msec 2023-07-18 02:15:16,997 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=146 2023-07-18 02:15:16,998 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2f28a773589111cb78b99d6b81a6c044, UNASSIGN in 189 msec 2023-07-18 02:15:16,998 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646516998"}]},"ts":"1689646516998"} 2023-07-18 02:15:16,999 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 02:15:17,001 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 02:15:17,002 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 201 msec 2023-07-18 02:15:17,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-18 02:15:17,105 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-18 02:15:17,106 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,110 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:17,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 02:15:17,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_93749856, current retry=0 2023-07-18 02:15:17,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_93749856. 2023-07-18 02:15:17,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:17,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 02:15:17,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:17,125 INFO [Listener at localhost/38101] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 02:15:17,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 02:15:17,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:17,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 922 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:39122 deadline: 1689646577126, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-18 02:15:17,127 DEBUG [Listener at localhost/38101] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-18 02:15:17,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 02:15:17,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,130 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_93749856' 2023-07-18 02:15:17,131 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:17,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-18 02:15:17,138 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:17,138 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:17,138 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:17,138 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:17,138 DEBUG [HFileArchiver-2] 
backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:17,141 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/recovered.edits] 2023-07-18 02:15:17,141 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/recovered.edits] 2023-07-18 02:15:17,141 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/recovered.edits] 2023-07-18 02:15:17,141 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/recovered.edits] 2023-07-18 02:15:17,142 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/f, FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/recovered.edits] 2023-07-18 02:15:17,150 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e/recovered.edits/4.seqid 2023-07-18 02:15:17,150 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee/recovered.edits/4.seqid 2023-07-18 02:15:17,151 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/701a99e0c263b7c9b528d737e988902e 2023-07-18 02:15:17,151 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044/recovered.edits/4.seqid 2023-07-18 02:15:17,151 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e/recovered.edits/4.seqid 2023-07-18 02:15:17,152 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/recovered.edits/4.seqid to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb/recovered.edits/4.seqid 2023-07-18 02:15:17,152 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/aef973101e2f6499975300e7250ac2ee 2023-07-18 02:15:17,152 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/2f28a773589111cb78b99d6b81a6c044 2023-07-18 02:15:17,152 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/67a6a461ca0a8fed3574ea2c030cb75e 2023-07-18 02:15:17,152 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/.tmp/data/default/Group_testDisabledTableMove/779ca4a243943953951087355d31d5bb 2023-07-18 02:15:17,153 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 02:15:17,155 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,157 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 02:15:17,162 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
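The HFileArchiver records above show each region directory being moved under the cluster's archive root rather than deleted outright; only afterwards are the empty source directories removed. A sketch that lists what ends up in the archive, using plain Hadoop FileSystem calls; the path is the one from this particular run and would differ in any other run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ArchivedRegionsSketch {
      public static void main(String[] args) throws Exception {
        // Archive root for the deleted table, as it appears in this test run.
        Path archived = new Path("hdfs://localhost:45101/user/jenkins/test-data/"
            + "6af1c3a2-c5d1-2318-c5b6-96cab05923a7/archive/data/default/Group_testDisabledTableMove");
        FileSystem fs = archived.getFileSystem(new Configuration());
        for (FileStatus region : fs.listStatus(archived)) {
          // One directory per archived region, e.g. .../701a99e0c263b7c9b528d737e988902e
          System.out.println(region.getPath());
        }
      }
    }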
2023-07-18 02:15:17,163 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,163 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-18 02:15:17,163 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646517163"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,163 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646517163"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,163 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646517163"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,164 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646517163"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,164 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646517163"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,165 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 02:15:17,165 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 701a99e0c263b7c9b528d737e988902e, NAME => 'Group_testDisabledTableMove,,1689646515681.701a99e0c263b7c9b528d737e988902e.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 779ca4a243943953951087355d31d5bb, NAME => 'Group_testDisabledTableMove,aaaaa,1689646515681.779ca4a243943953951087355d31d5bb.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => aef973101e2f6499975300e7250ac2ee, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689646515681.aef973101e2f6499975300e7250ac2ee.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 67a6a461ca0a8fed3574ea2c030cb75e, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689646515681.67a6a461ca0a8fed3574ea2c030cb75e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 2f28a773589111cb78b99d6b81a6c044, NAME => 'Group_testDisabledTableMove,zzzzz,1689646515681.2f28a773589111cb78b99d6b81a6c044.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 02:15:17,165 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-18 02:15:17,166 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646517165"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:17,167 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 02:15:17,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 02:15:17,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 42 msec 2023-07-18 02:15:17,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-18 02:15:17,238 INFO [Listener at localhost/38101] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-18 02:15:17,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:17,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
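The DELETE operation for procId 158 is reported complete above. The test utility got there by noting the table was "already disabled, so just deleting it"; the earlier second disable attempt had raised TableNotEnabledException. A minimal sketch of that guard-then-delete pattern, assuming an Admin handle (names are illustrative, not the utility's own code):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DeleteDisabledTableSketch {
      // Disable only if the table is still enabled (a redundant disable raises
      // TableNotEnabledException, as seen above), then delete it. deleteTable()
      // drives the DeleteTableProcedure: region directories are archived and the
      // region rows plus the table state row are removed from hbase:meta.
      static void dropTable(Admin admin, TableName tn) throws Exception {
        if (admin.isTableEnabled(tn)) {
          admin.disableTable(tn);
        }
        admin.deleteTable(tn);
      }
    }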
2023-07-18 02:15:17,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:17,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557] to rsgroup default 2023-07-18 02:15:17,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:17,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_93749856, current retry=0 2023-07-18 02:15:17,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35063,1689646489808, jenkins-hbase4.apache.org,39557,1689646489998] are moved back to Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_93749856 => default 2023-07-18 02:15:17,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:17,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_93749856 2023-07-18 02:15:17,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:17,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:17,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:17,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
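The teardown records above restore the rsgroup layout: the group's servers are moved back to the default group and Group_testDisabledTableMove_93749856 is removed, mirroring the earlier MoveTables request that placed the disabled table in that group. A hedged sketch of those client calls, assuming the unshaded RSGroupAdminClient from this module and the group and server names taken from this run (they would differ elsewhere); the wrapper class and method are illustrative:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupMoveSketch {
      static void moveTableAndCleanUp(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        String group = "Group_testDisabledTableMove_93749856";
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");

        // Move the disabled table into the group: the /hbase/rsgroup znodes are
        // rewritten, but no regions are reassigned ("Skipping move regions
        // because the table ... is disabled" above).
        groups.moveTables(Collections.singleton(tn), group);

        // Teardown, after the table has been deleted: return the group's servers
        // to 'default', then drop the now-empty group.
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35063));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39557));
        groups.moveServers(servers, "default");
        groups.removeRSGroup(group);
      }
    }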
2023-07-18 02:15:17,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:17,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:17,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:17,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:17,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:17,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:17,261 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:17,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:17,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:17,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:17,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:17,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:17,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 956 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647717270, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:17,271 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:17,272 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:17,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,273 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:17,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:17,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:17,291 INFO [Listener at localhost/38101] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=523 (was 522) Potentially hanging thread: hconnection-0x2e79eb29-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1294118745_17 at /127.0.0.1:44812 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x422d8bf2-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1021434382_17 at /127.0.0.1:40750 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 801) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 420), ProcessCount=170 (was 170), AvailableMemoryMB=4654 (was 4664) 2023-07-18 02:15:17,291 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 02:15:17,307 INFO [Listener at localhost/38101] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=523, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=420, ProcessCount=170, AvailableMemoryMB=4653 2023-07-18 02:15:17,307 WARN [Listener at localhost/38101] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-18 02:15:17,307 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 02:15:17,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:17,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:17,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:17,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:17,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:17,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:17,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:17,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:17,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:17,325 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:17,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:17,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
02:15:17,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:17,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:17,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:17,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40909] to rsgroup master 2023-07-18 02:15:17,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:17,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] ipc.CallRunner(144): callId: 984 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39122 deadline: 1689647717334, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 2023-07-18 02:15:17,335 WARN [Listener at localhost/38101] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40909 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:17,337 INFO [Listener at localhost/38101] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:17,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:17,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:17,338 INFO [Listener at localhost/38101] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35063, jenkins-hbase4.apache.org:39557, jenkins-hbase4.apache.org:43645, jenkins-hbase4.apache.org:45077], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:17,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:17,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40909] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:17,339 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 02:15:17,339 INFO [Listener at localhost/38101] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 02:15:17,339 DEBUG [Listener at localhost/38101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b32111a to 127.0.0.1:54439 2023-07-18 02:15:17,339 DEBUG [Listener at localhost/38101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,342 DEBUG [Listener at localhost/38101] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 02:15:17,342 DEBUG [Listener at localhost/38101] util.JVMClusterUtil(257): Found active master hash=1446941524, stopped=false 2023-07-18 02:15:17,342 DEBUG [Listener at localhost/38101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:15:17,343 DEBUG [Listener at localhost/38101] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:15:17,343 INFO [Listener at localhost/38101] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:17,346 INFO [Listener at localhost/38101] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:17,346 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:17,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:17,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:17,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:17,347 DEBUG [Listener at localhost/38101] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2e3a8222 to 127.0.0.1:54439 2023-07-18 02:15:17,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:17,347 DEBUG [Listener at localhost/38101] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,347 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45077,1689646489555' ***** 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35063,1689646489808' ***** 2023-07-18 02:15:17,348 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39557,1689646489998' ***** 2023-07-18 02:15:17,348 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:17,348 INFO [Listener at localhost/38101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:17,349 INFO [Listener at localhost/38101] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43645,1689646493716' ***** 2023-07-18 02:15:17,350 INFO [Listener at 
localhost/38101] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:17,349 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:17,350 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:17,366 INFO [RS:3;jenkins-hbase4:43645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@e9c768a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:17,366 INFO [RS:1;jenkins-hbase4:35063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@67816748{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:17,366 INFO [RS:2;jenkins-hbase4:39557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@38a30dd7{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:17,366 INFO [RS:0;jenkins-hbase4:45077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40f2000c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:17,371 INFO [RS:0;jenkins-hbase4:45077] server.AbstractConnector(383): Stopped ServerConnector@22a36f37{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,371 INFO [RS:2;jenkins-hbase4:39557] server.AbstractConnector(383): Stopped ServerConnector@16240f3c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,371 INFO [RS:3;jenkins-hbase4:43645] server.AbstractConnector(383): Stopped ServerConnector@53ac63a1{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,371 INFO [RS:1;jenkins-hbase4:35063] server.AbstractConnector(383): Stopped ServerConnector@1ba8dae2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,371 INFO [RS:3;jenkins-hbase4:43645] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:17,371 INFO [RS:2;jenkins-hbase4:39557] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:17,371 INFO [RS:0;jenkins-hbase4:45077] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:17,372 INFO [RS:1;jenkins-hbase4:35063] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:17,373 INFO [RS:2;jenkins-hbase4:39557] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7431440f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:17,373 INFO [RS:0;jenkins-hbase4:45077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c80b18{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:17,374 INFO [RS:2;jenkins-hbase4:39557] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@145f3cb8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:17,375 INFO [RS:0;jenkins-hbase4:45077] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c6f9d30{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:17,373 INFO [RS:3;jenkins-hbase4:43645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@70ac5277{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:17,374 INFO [RS:1;jenkins-hbase4:35063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46770358{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:17,376 INFO [RS:3;jenkins-hbase4:43645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2474d7bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:17,376 INFO [RS:1;jenkins-hbase4:35063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64f476b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:17,379 INFO [RS:2;jenkins-hbase4:39557] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:17,379 INFO [RS:1;jenkins-hbase4:35063] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:17,379 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:17,380 INFO [RS:2;jenkins-hbase4:39557] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:17,380 INFO [RS:2;jenkins-hbase4:39557] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:17,380 INFO [RS:3;jenkins-hbase4:43645] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:17,380 INFO [RS:0;jenkins-hbase4:45077] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:17,380 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:17,380 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:17,380 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:17,380 INFO [RS:1;jenkins-hbase4:35063] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:17,380 DEBUG [RS:2;jenkins-hbase4:39557] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1e6a0baf to 127.0.0.1:54439 2023-07-18 02:15:17,380 INFO [RS:3;jenkins-hbase4:43645] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-18 02:15:17,380 INFO [RS:1;jenkins-hbase4:35063] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:17,381 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:17,380 INFO [RS:3;jenkins-hbase4:43645] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:17,381 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(3305): Received CLOSE for bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:17,381 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:17,381 DEBUG [RS:3;jenkins-hbase4:43645] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1afe09e6 to 127.0.0.1:54439 2023-07-18 02:15:17,380 INFO [RS:0;jenkins-hbase4:45077] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:17,380 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:17,380 DEBUG [RS:2;jenkins-hbase4:39557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,382 INFO [RS:0;jenkins-hbase4:45077] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:17,382 DEBUG [RS:3;jenkins-hbase4:43645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bbf71cfacd6e4740d14aa9af8f240c8d, disabling compactions & flushes 2023-07-18 02:15:17,383 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 02:15:17,381 DEBUG [RS:1;jenkins-hbase4:35063] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x08692c10 to 127.0.0.1:54439 2023-07-18 02:15:17,383 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1478): Online Regions={bbf71cfacd6e4740d14aa9af8f240c8d=testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d.} 2023-07-18 02:15:17,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:17,383 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(3305): Received CLOSE for eb5efc21960221b704d272f83f5b2dec 2023-07-18 02:15:17,383 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39557,1689646489998; all regions closed. 2023-07-18 02:15:17,384 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(3305): Received CLOSE for fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:17,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:17,383 DEBUG [RS:1;jenkins-hbase4:35063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
after waiting 0 ms 2023-07-18 02:15:17,384 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(3305): Received CLOSE for 7925c60bcfbbace6dabdab5258b7cdde 2023-07-18 02:15:17,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eb5efc21960221b704d272f83f5b2dec, disabling compactions & flushes 2023-07-18 02:15:17,385 DEBUG [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1504): Waiting on bbf71cfacd6e4740d14aa9af8f240c8d 2023-07-18 02:15:17,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:17,385 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35063,1689646489808; all regions closed. 2023-07-18 02:15:17,385 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:17,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:17,385 DEBUG [RS:0;jenkins-hbase4:45077] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x54420ef1 to 127.0.0.1:54439 2023-07-18 02:15:17,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:17,385 DEBUG [RS:0;jenkins-hbase4:45077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,385 INFO [RS:0;jenkins-hbase4:45077] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:17,385 INFO [RS:0;jenkins-hbase4:45077] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:17,385 INFO [RS:0;jenkins-hbase4:45077] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:17,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. after waiting 0 ms 2023-07-18 02:15:17,386 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 02:15:17,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 
2023-07-18 02:15:17,388 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 02:15:17,388 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,388 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1478): Online Regions={eb5efc21960221b704d272f83f5b2dec=unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec., fbc284aeb66f3eaca0bb2d67e73a56a3=hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3., 7925c60bcfbbace6dabdab5258b7cdde=hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde., 1588230740=hbase:meta,,1.1588230740} 2023-07-18 02:15:17,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:17,388 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:17,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:17,388 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,388 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,388 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:17,389 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:17,388 DEBUG [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1504): Waiting on 1588230740, 7925c60bcfbbace6dabdab5258b7cdde, eb5efc21960221b704d272f83f5b2dec, fbc284aeb66f3eaca0bb2d67e73a56a3 2023-07-18 02:15:17,389 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.48 KB heapSize=61.13 KB 2023-07-18 02:15:17,396 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/testRename/bbf71cfacd6e4740d14aa9af8f240c8d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 02:15:17,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/default/unmovedTable/eb5efc21960221b704d272f83f5b2dec/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 02:15:17,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 2023-07-18 02:15:17,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bbf71cfacd6e4740d14aa9af8f240c8d: 2023-07-18 02:15:17,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689646510101.bbf71cfacd6e4740d14aa9af8f240c8d. 
2023-07-18 02:15:17,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:17,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eb5efc21960221b704d272f83f5b2dec: 2023-07-18 02:15:17,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689646511763.eb5efc21960221b704d272f83f5b2dec. 2023-07-18 02:15:17,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fbc284aeb66f3eaca0bb2d67e73a56a3, disabling compactions & flushes 2023-07-18 02:15:17,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:17,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:17,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. after waiting 0 ms 2023-07-18 02:15:17,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:17,419 DEBUG [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,419 INFO [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39557%2C1689646489998.meta:.meta(num 1689646492426) 2023-07-18 02:15:17,422 DEBUG [RS:1;jenkins-hbase4:35063] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,422 INFO [RS:1;jenkins-hbase4:35063] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35063%2C1689646489808:(num 1689646492256) 2023-07-18 02:15:17,422 DEBUG [RS:1;jenkins-hbase4:35063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,422 INFO [RS:1;jenkins-hbase4:35063] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,427 INFO [RS:1;jenkins-hbase4:35063] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:17,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/namespace/fbc284aeb66f3eaca0bb2d67e73a56a3/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-18 02:15:17,427 INFO [RS:1;jenkins-hbase4:35063] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:17,427 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:17,427 INFO [RS:1;jenkins-hbase4:35063] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 02:15:17,427 INFO [RS:1;jenkins-hbase4:35063] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:17,429 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:17,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fbc284aeb66f3eaca0bb2d67e73a56a3: 2023-07-18 02:15:17,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689646492720.fbc284aeb66f3eaca0bb2d67e73a56a3. 2023-07-18 02:15:17,429 INFO [RS:1;jenkins-hbase4:35063] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35063 2023-07-18 02:15:17,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7925c60bcfbbace6dabdab5258b7cdde, disabling compactions & flushes 2023-07-18 02:15:17,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:15:17,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:15:17,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. after waiting 0 ms 2023-07-18 02:15:17,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 
2023-07-18 02:15:17,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7925c60bcfbbace6dabdab5258b7cdde 1/1 column families, dataSize=28.45 KB heapSize=46.80 KB 2023-07-18 02:15:17,448 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:17,448 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,448 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,449 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:17,449 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,449 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:17,449 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,450 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35063,1689646489808 2023-07-18 02:15:17,451 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,451 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35063,1689646489808] 2023-07-18 02:15:17,451 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35063,1689646489808; numProcessing=1 2023-07-18 02:15:17,453 DEBUG [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,453 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35063,1689646489808 already deleted, retry=false 2023-07-18 02:15:17,453 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,35063,1689646489808 expired; onlineServers=3 2023-07-18 02:15:17,453 INFO [RS:2;jenkins-hbase4:39557] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39557%2C1689646489998:(num 1689646492255) 2023-07-18 02:15:17,453 DEBUG [RS:2;jenkins-hbase4:39557] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,453 INFO [RS:2;jenkins-hbase4:39557] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,454 INFO [RS:2;jenkins-hbase4:39557] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:17,455 INFO [RS:2;jenkins-hbase4:39557] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:17,455 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:17,455 INFO [RS:2;jenkins-hbase4:39557] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:17,455 INFO [RS:2;jenkins-hbase4:39557] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:17,457 INFO [RS:2;jenkins-hbase4:39557] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39557 2023-07-18 02:15:17,464 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.56 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/6f7550aed1f24c4e8f723108e16d39b1 2023-07-18 02:15:17,465 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:17,465 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,465 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:17,465 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39557,1689646489998 2023-07-18 02:15:17,466 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39557,1689646489998] 2023-07-18 02:15:17,466 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39557,1689646489998; numProcessing=2 2023-07-18 02:15:17,467 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39557,1689646489998 already deleted, retry=false 2023-07-18 02:15:17,467 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; 
jenkins-hbase4.apache.org,39557,1689646489998 expired; onlineServers=2 2023-07-18 02:15:17,471 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6f7550aed1f24c4e8f723108e16d39b1 2023-07-18 02:15:17,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=28.45 KB at sequenceid=95 (bloomFilter=true), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/.tmp/m/6a8446ec3e2c4ce395cbcfcf34fe4220 2023-07-18 02:15:17,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a8446ec3e2c4ce395cbcfcf34fe4220 2023-07-18 02:15:17,490 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/rep_barrier/9cf61827802041ae8f3e6c092e2b07eb 2023-07-18 02:15:17,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/.tmp/m/6a8446ec3e2c4ce395cbcfcf34fe4220 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/m/6a8446ec3e2c4ce395cbcfcf34fe4220 2023-07-18 02:15:17,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a8446ec3e2c4ce395cbcfcf34fe4220 2023-07-18 02:15:17,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/m/6a8446ec3e2c4ce395cbcfcf34fe4220, entries=28, sequenceid=95, filesize=6.1 K 2023-07-18 02:15:17,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cf61827802041ae8f3e6c092e2b07eb 2023-07-18 02:15:17,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~28.45 KB/29132, heapSize ~46.78 KB/47904, currentSize=0 B/0 for 7925c60bcfbbace6dabdab5258b7cdde in 64ms, sequenceid=95, compaction requested=false 2023-07-18 02:15:17,505 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/rsgroup/7925c60bcfbbace6dabdab5258b7cdde/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-18 02:15:17,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:17,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 
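
The NodeDeleted events and the RegionServerTracker "processing expiration" records above illustrate how the master detects a stopped region server: each server owns an ephemeral znode under /hbase/rs, and the master reacts when that node disappears with its session. Below is a minimal standalone sketch of the same ephemeral-node/watch mechanism using the plain ZooKeeper client; the quorum address, znode path, and timeout are placeholders for illustration (not values from this run), and connection-establishment handling is elided.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralWatchDemo {
  public static void main(String[] args) throws Exception {
    String quorum = "127.0.0.1:2181";          // assumption: some local ZooKeeper, not this test's mini quorum
    CountDownLatch deleted = new CountDownLatch(1);

    // "Master" side: a session that watches the znode and reacts when it disappears.
    ZooKeeper observer = new ZooKeeper(quorum, 30000, event -> { });
    // "Region server" side: a second session that owns the ephemeral node.
    ZooKeeper owner = new ZooKeeper(quorum, 30000, event -> { });
    String path = owner.create("/demo-rs", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

    // exists() registers a one-shot watch; NodeDeleted fires once the owning session goes away.
    observer.exists(path, event -> {
      if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
        deleted.countDown();                   // analogous to "processing expiration [...]" above
      }
    });

    owner.close();                             // the ephemeral node is removed with the session
    deleted.await();
    observer.close();
  }
}
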
2023-07-18 02:15:17,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7925c60bcfbbace6dabdab5258b7cdde: 2023-07-18 02:15:17,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689646492926.7925c60bcfbbace6dabdab5258b7cdde. 2023-07-18 02:15:17,515 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/c4206003def040b0b9c7d5fcd367d358 2023-07-18 02:15:17,520 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c4206003def040b0b9c7d5fcd367d358 2023-07-18 02:15:17,520 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/info/6f7550aed1f24c4e8f723108e16d39b1 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/6f7550aed1f24c4e8f723108e16d39b1 2023-07-18 02:15:17,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6f7550aed1f24c4e8f723108e16d39b1 2023-07-18 02:15:17,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/info/6f7550aed1f24c4e8f723108e16d39b1, entries=62, sequenceid=216, filesize=11.9 K 2023-07-18 02:15:17,526 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/rep_barrier/9cf61827802041ae8f3e6c092e2b07eb as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier/9cf61827802041ae8f3e6c092e2b07eb 2023-07-18 02:15:17,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9cf61827802041ae8f3e6c092e2b07eb 2023-07-18 02:15:17,532 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/rep_barrier/9cf61827802041ae8f3e6c092e2b07eb, entries=8, sequenceid=216, filesize=5.8 K 2023-07-18 02:15:17,533 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/.tmp/table/c4206003def040b0b9c7d5fcd367d358 as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/c4206003def040b0b9c7d5fcd367d358 2023-07-18 02:15:17,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c4206003def040b0b9c7d5fcd367d358 2023-07-18 02:15:17,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/table/c4206003def040b0b9c7d5fcd367d358, entries=16, sequenceid=216, filesize=6.0 K 2023-07-18 02:15:17,540 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.48 KB/38382, heapSize ~61.08 KB/62544, currentSize=0 B/0 for 1588230740 in 151ms, sequenceid=216, compaction requested=true 2023-07-18 02:15:17,540 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 02:15:17,556 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/data/hbase/meta/1588230740/recovered.edits/219.seqid, newMaxSeqId=219, maxSeqId=104 2023-07-18 02:15:17,558 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:17,558 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:17,559 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:15:17,559 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:17,585 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43645,1689646493716; all regions closed. 2023-07-18 02:15:17,589 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45077,1689646489555; all regions closed. 
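
The flush records above show the usual two-step pattern for hbase:meta and hbase:rsgroup at close time: each memstore is written to an HFile under the region's .tmp directory and then committed into its column-family directory (info, rep_barrier, table, m) before the region is closed. Those flushes are triggered automatically by region close; for comparison, a hedged client-side sketch of forcing the same kind of flush from a test through the Admin API (configuration discovery via an hbase-site.xml on the classpath is an assumption of the sketch).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ForceFlushDemo {
  public static void main(String[] args) throws Exception {
    // Assumption: hbase-site.xml on the classpath points at the (mini) cluster under test.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Flush hbase:meta and hbase:rsgroup; each memstore is written to a .tmp HFile
      // and then committed into its column-family directory, as in the records above.
      admin.flush(TableName.META_TABLE_NAME);
      admin.flush(TableName.valueOf("hbase", "rsgroup"));
    }
  }
}
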
2023-07-18 02:15:17,593 DEBUG [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,593 INFO [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43645%2C1689646493716.meta:.meta(num 1689646494931) 2023-07-18 02:15:17,597 DEBUG [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,597 INFO [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45077%2C1689646489555.meta:.meta(num 1689646501575) 2023-07-18 02:15:17,602 DEBUG [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,602 INFO [RS:3;jenkins-hbase4:43645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43645%2C1689646493716:(num 1689646494178) 2023-07-18 02:15:17,602 DEBUG [RS:3;jenkins-hbase4:43645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,602 INFO [RS:3;jenkins-hbase4:43645] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,603 INFO [RS:3;jenkins-hbase4:43645] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:17,603 INFO [RS:3;jenkins-hbase4:43645] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:17,603 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:17,603 INFO [RS:3;jenkins-hbase4:43645] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:17,603 INFO [RS:3;jenkins-hbase4:43645] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:17,604 INFO [RS:3;jenkins-hbase4:43645] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43645 2023-07-18 02:15:17,605 DEBUG [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/oldWALs 2023-07-18 02:15:17,605 INFO [RS:0;jenkins-hbase4:45077] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45077%2C1689646489555:(num 1689646492256) 2023-07-18 02:15:17,605 DEBUG [RS:0;jenkins-hbase4:45077] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,605 INFO [RS:0;jenkins-hbase4:45077] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:17,605 INFO [RS:0;jenkins-hbase4:45077] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:17,605 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
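
The AbstractFSWAL records above show each region server closing its write-ahead log and archiving the remaining files to the oldWALs directory as the LogRoller exits. Outside of shutdown, a test can nudge the same archiving along by asking servers to roll their WALs; a sketch using the public Admin API (server discovery via the default configuration is an assumption, not something this test does).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollWalDemo {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Ask every live region server to roll its WAL. Once a rolled WAL's edits are
      // persisted, the old file becomes eligible for archiving to oldWALs, as above.
      for (ServerName rs : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(rs);
      }
    }
  }
}
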
2023-07-18 02:15:17,606 INFO [RS:0;jenkins-hbase4:45077] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45077 2023-07-18 02:15:17,607 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:17,607 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:17,607 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43645,1689646493716 2023-07-18 02:15:17,608 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:17,608 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45077,1689646489555 2023-07-18 02:15:17,608 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43645,1689646493716] 2023-07-18 02:15:17,608 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43645,1689646493716; numProcessing=3 2023-07-18 02:15:17,609 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43645,1689646493716 already deleted, retry=false 2023-07-18 02:15:17,609 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43645,1689646493716 expired; onlineServers=1 2023-07-18 02:15:17,610 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45077,1689646489555] 2023-07-18 02:15:17,610 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45077,1689646489555; numProcessing=4 2023-07-18 02:15:17,612 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45077,1689646489555 already deleted, retry=false 2023-07-18 02:15:17,612 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45077,1689646489555 expired; onlineServers=0 2023-07-18 02:15:17,612 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40909,1689646487536' ***** 2023-07-18 02:15:17,612 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 02:15:17,612 DEBUG [M:0;jenkins-hbase4:40909] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1703996f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:17,612 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:17,614 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:17,614 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:17,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:17,615 INFO [M:0;jenkins-hbase4:40909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7f562033{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 02:15:17,615 INFO [M:0;jenkins-hbase4:40909] server.AbstractConnector(383): Stopped ServerConnector@1f3e883d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,615 INFO [M:0;jenkins-hbase4:40909] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:17,616 INFO [M:0;jenkins-hbase4:40909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1477abdb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:17,616 INFO [M:0;jenkins-hbase4:40909] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c595dcd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:17,616 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40909,1689646487536 2023-07-18 02:15:17,617 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40909,1689646487536; all regions closed. 2023-07-18 02:15:17,617 DEBUG [M:0;jenkins-hbase4:40909] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:17,617 INFO [M:0;jenkins-hbase4:40909] master.HMaster(1491): Stopping master jetty server 2023-07-18 02:15:17,617 INFO [M:0;jenkins-hbase4:40909] server.AbstractConnector(383): Stopped ServerConnector@2938e868{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:17,618 DEBUG [M:0;jenkins-hbase4:40909] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 02:15:17,618 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-18 02:15:17,618 DEBUG [M:0;jenkins-hbase4:40909] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 02:15:17,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646491789] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646491789,5,FailOnTimeoutGroup] 2023-07-18 02:15:17,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646491788] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646491788,5,FailOnTimeoutGroup] 2023-07-18 02:15:17,618 INFO [M:0;jenkins-hbase4:40909] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 02:15:17,618 INFO [M:0;jenkins-hbase4:40909] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 02:15:17,618 INFO [M:0;jenkins-hbase4:40909] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 02:15:17,618 DEBUG [M:0;jenkins-hbase4:40909] master.HMaster(1512): Stopping service threads 2023-07-18 02:15:17,618 INFO [M:0;jenkins-hbase4:40909] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 02:15:17,619 ERROR [M:0;jenkins-hbase4:40909] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 02:15:17,619 INFO [M:0;jenkins-hbase4:40909] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 02:15:17,619 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 02:15:17,620 DEBUG [M:0;jenkins-hbase4:40909] zookeeper.ZKUtil(398): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 02:15:17,620 WARN [M:0;jenkins-hbase4:40909] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 02:15:17,620 INFO [M:0;jenkins-hbase4:40909] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 02:15:17,620 INFO [M:0;jenkins-hbase4:40909] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 02:15:17,620 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 02:15:17,620 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:17,620 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 02:15:17,620 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 02:15:17,620 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:17,620 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.70 KB heapSize=632.87 KB 2023-07-18 02:15:17,636 INFO [M:0;jenkins-hbase4:40909] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.70 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6a4d9558b4f4490da9d4164af6f15d3a 2023-07-18 02:15:17,642 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6a4d9558b4f4490da9d4164af6f15d3a as hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6a4d9558b4f4490da9d4164af6f15d3a 2023-07-18 02:15:17,647 INFO [M:0;jenkins-hbase4:40909] regionserver.HStore(1080): Added hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6a4d9558b4f4490da9d4164af6f15d3a, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-18 02:15:17,648 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegion(2948): Finished flush of dataSize ~528.70 KB/541392, heapSize ~632.85 KB/648040, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=1176, compaction requested=false 2023-07-18 02:15:17,651 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:17,651 DEBUG [M:0;jenkins-hbase4:40909] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:17,655 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:17,655 INFO [M:0;jenkins-hbase4:40909] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-18 02:15:17,656 INFO [M:0;jenkins-hbase4:40909] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40909 2023-07-18 02:15:17,657 DEBUG [M:0;jenkins-hbase4:40909] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40909,1689646487536 already deleted, retry=false 2023-07-18 02:15:17,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:17,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:15:17,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:15:18,046 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,046 INFO [M:0;jenkins-hbase4:40909] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40909,1689646487536; zookeeper connection closed. 2023-07-18 02:15:18,047 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): master:40909-0x1017635d76e0000, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,147 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,147 INFO [RS:0;jenkins-hbase4:45077] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45077,1689646489555; zookeeper connection closed. 2023-07-18 02:15:18,147 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:45077-0x1017635d76e0001, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,147 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@399e4dd9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@399e4dd9 2023-07-18 02:15:18,247 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,247 INFO [RS:3;jenkins-hbase4:43645] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43645,1689646493716; zookeeper connection closed. 
2023-07-18 02:15:18,247 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:43645-0x1017635d76e000b, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,247 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3854ec0d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3854ec0d 2023-07-18 02:15:18,347 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,347 INFO [RS:2;jenkins-hbase4:39557] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39557,1689646489998; zookeeper connection closed. 2023-07-18 02:15:18,347 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:39557-0x1017635d76e0003, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,348 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3fdb1ad2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3fdb1ad2 2023-07-18 02:15:18,448 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,448 INFO [RS:1;jenkins-hbase4:35063] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35063,1689646489808; zookeeper connection closed. 2023-07-18 02:15:18,448 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): regionserver:35063-0x1017635d76e0002, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:18,448 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3a815752] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3a815752 2023-07-18 02:15:18,448 INFO [Listener at localhost/38101] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 02:15:18,449 WARN [Listener at localhost/38101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 02:15:18,453 INFO [Listener at localhost/38101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:18,556 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:18,556 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-566210079-172.31.14.131-1689646483854 (Datanode Uuid 9aecdbd1-b572-4706-a4d4-21916359a3ed) service to localhost/127.0.0.1:45101 2023-07-18 02:15:18,558 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data5/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
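
The JVMClusterUtil record above ("Shutdown of 1 master(s) and 4 regionserver(s) complete") and the DataNode and ZooKeeper teardown around it are what HBaseTestingUtility drives from the test scaffolding. A minimal JUnit-style sketch of that lifecycle follows; the class name, the region-server count, and the omitted test body are illustrative, not taken from TestRSGroupsAdmin1.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    TEST_UTIL.startMiniCluster(3);   // three region servers; a simplified stand-in for the real setup
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Stops the HBase JVM cluster, then the mini DFS and mini ZooKeeper cluster,
    // ending in the "Minicluster is down" record seen shortly after this point in the log.
    TEST_UTIL.shutdownMiniCluster();
  }
}
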
2023-07-18 02:15:18,558 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data6/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:18,560 WARN [Listener at localhost/38101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 02:15:18,563 INFO [Listener at localhost/38101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:18,665 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:18,665 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-566210079-172.31.14.131-1689646483854 (Datanode Uuid 23639624-f764-411b-b155-4a61e0a33cb4) service to localhost/127.0.0.1:45101 2023-07-18 02:15:18,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data3/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:18,666 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data4/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:18,667 WARN [Listener at localhost/38101] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 02:15:18,669 INFO [Listener at localhost/38101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:18,772 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:18,772 WARN [BP-566210079-172.31.14.131-1689646483854 heartbeating to localhost/127.0.0.1:45101] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-566210079-172.31.14.131-1689646483854 (Datanode Uuid fcec732a-fdff-4880-8d24-29c30e97cc1b) service to localhost/127.0.0.1:45101 2023-07-18 02:15:18,773 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data1/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:18,774 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/cluster_114a01d3-d950-74e3-9098-0eab13676d5a/dfs/data/data2/current/BP-566210079-172.31.14.131-1689646483854] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:18,800 INFO [Listener at localhost/38101] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:18,920 INFO [Listener at localhost/38101] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 02:15:18,970 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.log.dir so I do NOT create it in target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7d9e1427-d9ef-a78e-d989-6465a7eb0c3a/hadoop.tmp.dir so I do NOT create it in target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d, deleteOnExit=true 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/test.cache.data in system properties and HBase conf 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir in system properties and HBase conf 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 02:15:18,971 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 02:15:18,972 
INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 02:15:18,972 DEBUG [Listener at localhost/38101] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 02:15:18,972 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/nfs.dump.dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 02:15:18,973 INFO [Listener at localhost/38101] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 02:15:18,977 WARN [Listener at localhost/38101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:15:18,978 WARN [Listener at localhost/38101] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:15:19,016 DEBUG [Listener at localhost/38101-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017635d76e000a, quorum=127.0.0.1:54439, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 02:15:19,016 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017635d76e000a, quorum=127.0.0.1:54439, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 02:15:19,016 WARN [Listener at localhost/38101] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 02:15:19,063 WARN [Listener at localhost/38101] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:19,065 INFO [Listener at localhost/38101] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:19,069 INFO [Listener at localhost/38101] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/Jetty_localhost_35911_hdfs____.rflf4v/webapp 2023-07-18 02:15:19,162 INFO [Listener at localhost/38101] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35911 2023-07-18 02:15:19,171 WARN [Listener at localhost/38101] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:15:19,171 WARN [Listener at localhost/38101] conf.Configuration(1701): No 
unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:15:19,214 WARN [Listener at localhost/45369] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:19,232 WARN [Listener at localhost/45369] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:19,234 WARN [Listener at localhost/45369] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:19,235 INFO [Listener at localhost/45369] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:19,240 INFO [Listener at localhost/45369] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/Jetty_localhost_40047_datanode____grczry/webapp 2023-07-18 02:15:19,336 INFO [Listener at localhost/45369] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40047 2023-07-18 02:15:19,345 WARN [Listener at localhost/37517] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:19,364 WARN [Listener at localhost/37517] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:19,366 WARN [Listener at localhost/37517] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:19,368 INFO [Listener at localhost/37517] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:19,371 INFO [Listener at localhost/37517] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/Jetty_localhost_35097_datanode____.im11t9/webapp 2023-07-18 02:15:19,475 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3584629022a764b3: Processing first storage report for DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5 from datanode 3d685646-bffb-4f36-8628-0960e2d5d90f 2023-07-18 02:15:19,475 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3584629022a764b3: from storage DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5 node DatanodeRegistration(127.0.0.1:44711, datanodeUuid=3d685646-bffb-4f36-8628-0960e2d5d90f, infoPort=43177, infoSecurePort=0, ipcPort=37517, storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,476 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3584629022a764b3: Processing first storage report for DS-aec24283-7a40-4b67-8dd2-14f3b8289a54 from datanode 3d685646-bffb-4f36-8628-0960e2d5d90f 2023-07-18 02:15:19,476 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3584629022a764b3: from storage DS-aec24283-7a40-4b67-8dd2-14f3b8289a54 node DatanodeRegistration(127.0.0.1:44711, datanodeUuid=3d685646-bffb-4f36-8628-0960e2d5d90f, infoPort=43177, infoSecurePort=0, ipcPort=37517, 
storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,496 INFO [Listener at localhost/37517] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35097 2023-07-18 02:15:19,504 WARN [Listener at localhost/37625] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:19,529 WARN [Listener at localhost/37625] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:19,532 WARN [Listener at localhost/37625] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:19,534 INFO [Listener at localhost/37625] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:19,539 INFO [Listener at localhost/37625] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/Jetty_localhost_40667_datanode____7khjaa/webapp 2023-07-18 02:15:19,630 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6c375e86e031787: Processing first storage report for DS-731b220f-5a89-40dd-8367-668303e01b62 from datanode b0b3ac22-6328-42b5-8d35-0e07147bf67a 2023-07-18 02:15:19,630 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6c375e86e031787: from storage DS-731b220f-5a89-40dd-8367-668303e01b62 node DatanodeRegistration(127.0.0.1:34755, datanodeUuid=b0b3ac22-6328-42b5-8d35-0e07147bf67a, infoPort=33895, infoSecurePort=0, ipcPort=37625, storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,630 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6c375e86e031787: Processing first storage report for DS-36ba2a0b-3067-44a3-9ae6-068bc7fbd4b0 from datanode b0b3ac22-6328-42b5-8d35-0e07147bf67a 2023-07-18 02:15:19,630 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6c375e86e031787: from storage DS-36ba2a0b-3067-44a3-9ae6-068bc7fbd4b0 node DatanodeRegistration(127.0.0.1:34755, datanodeUuid=b0b3ac22-6328-42b5-8d35-0e07147bf67a, infoPort=33895, infoSecurePort=0, ipcPort=37625, storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,673 INFO [Listener at localhost/37625] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40667 2023-07-18 02:15:19,702 WARN [Listener at localhost/42081] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:19,822 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86d39b7473ae2b1e: Processing first storage report for DS-13383ffd-37a5-4adb-9bed-3db45d050a8d from datanode bfce7ef3-d642-4887-bccd-e2277610874f 2023-07-18 02:15:19,822 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86d39b7473ae2b1e: from storage DS-13383ffd-37a5-4adb-9bed-3db45d050a8d node 
DatanodeRegistration(127.0.0.1:44607, datanodeUuid=bfce7ef3-d642-4887-bccd-e2277610874f, infoPort=46345, infoSecurePort=0, ipcPort=42081, storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,822 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x86d39b7473ae2b1e: Processing first storage report for DS-77457738-b3f1-4372-b278-bec42c1bec74 from datanode bfce7ef3-d642-4887-bccd-e2277610874f 2023-07-18 02:15:19,822 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x86d39b7473ae2b1e: from storage DS-77457738-b3f1-4372-b278-bec42c1bec74 node DatanodeRegistration(127.0.0.1:44607, datanodeUuid=bfce7ef3-d642-4887-bccd-e2277610874f, infoPort=46345, infoSecurePort=0, ipcPort=42081, storageInfo=lv=-57;cid=testClusterID;nsid=495322761;c=1689646518980), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:19,844 DEBUG [Listener at localhost/42081] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa 2023-07-18 02:15:19,847 INFO [Listener at localhost/42081] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/zookeeper_0, clientPort=53987, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 02:15:19,849 INFO [Listener at localhost/42081] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53987 2023-07-18 02:15:19,849 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:19,850 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:19,891 INFO [Listener at localhost/42081] util.FSUtils(471): Created version file at hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0 with version=8 2023-07-18 02:15:19,891 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/hbase-staging 2023-07-18 02:15:19,893 DEBUG [Listener at localhost/42081] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 02:15:19,893 DEBUG [Listener at localhost/42081] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 
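
The records above trace the second mini cluster coming up: a StartMiniClusterOption with one master, three region servers, three data nodes and one ZooKeeper server, the mini DFS datanodes registering their storages, MiniZooKeeperCluster binding a client port, and the HBase version file being written into the new root directory. A sketch of the equivalent programmatic startup with StartMiniClusterOption; this is a simplified stand-alone driver, not the test's actual setup code.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartDemo {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the option string logged above: 1 master, 3 region servers,
    // 3 data nodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // starts mini DFS, mini ZooKeeper, then HBase itself
    // ... exercise the cluster ...
    util.shutdownMiniCluster();
  }
}
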
2023-07-18 02:15:19,893 DEBUG [Listener at localhost/42081] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 02:15:19,893 DEBUG [Listener at localhost/42081] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 02:15:19,894 INFO [Listener at localhost/42081] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:19,895 INFO [Listener at localhost/42081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:19,897 INFO [Listener at localhost/42081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43727 2023-07-18 02:15:19,898 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:19,899 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:19,901 INFO [Listener at localhost/42081] zookeeper.RecoverableZooKeeper(93): Process identifier=master:43727 connecting to ZooKeeper ensemble=127.0.0.1:53987 2023-07-18 02:15:19,919 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:437270x0, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:19,920 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:43727-0x101763659290000 connected 2023-07-18 02:15:19,985 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:19,986 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:19,986 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does 
not yet exist, /hbase/acl 2023-07-18 02:15:19,993 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43727 2023-07-18 02:15:19,993 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43727 2023-07-18 02:15:19,994 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43727 2023-07-18 02:15:19,998 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43727 2023-07-18 02:15:19,999 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43727 2023-07-18 02:15:20,002 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:20,002 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:20,002 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:20,003 INFO [Listener at localhost/42081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 02:15:20,003 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:20,003 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:20,004 INFO [Listener at localhost/42081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
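[Editor's note] The entries above show each RPC executor being instantiated with a bounded LinkedBlockingQueue per call queue (maxQueueLength=30) and a fixed handler count draining it. Below is a minimal, JDK-only sketch of that queue-plus-handlers pattern; it is illustrative and is not HBase's actual RpcExecutor code (the Call class and thread names are made up for the example).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative stand-in for an RPC call; not an HBase class.
final class Call implements Runnable {
    private final String name;
    Call(String name) { this.name = name; }
    @Override public void run() { System.out.println("handled " + name); }
}

public class BoundedCallQueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // One bounded call queue (maxQueueLength=30) drained by handlerCount=3 threads,
        // mirroring "numCallQueues=1, maxQueueLength=30, handlerCount=3" in the log.
        BlockingQueue<Call> callQueue = new LinkedBlockingQueue<>(30);
        for (int i = 0; i < 3; i++) {
            Thread handler = new Thread(() -> {
                try {
                    while (true) {
                        callQueue.take().run();   // block until a call is available
                    }
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            }, "default.FPBQ.Fifo.handler-" + i);
            handler.setDaemon(true);
            handler.start();
        }
        // offer() returns false when the queue is full, which is where a real RPC
        // server would reject the call rather than block the reader thread.
        boolean accepted = callQueue.offer(new Call("get"));
        System.out.println("accepted=" + accepted);
        Thread.sleep(100);
    }
}
```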
2023-07-18 02:15:20,004 INFO [Listener at localhost/42081] http.HttpServer(1146): Jetty bound to port 38967 2023-07-18 02:15:20,005 INFO [Listener at localhost/42081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:20,020 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,021 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6bab3cdc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:20,021 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,022 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5959e4bb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:20,150 INFO [Listener at localhost/42081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:20,151 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:20,151 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:20,151 INFO [Listener at localhost/42081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:20,155 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,156 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@19a648c7{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/jetty-0_0_0_0-38967-hbase-server-2_4_18-SNAPSHOT_jar-_-any-257467638877899970/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 02:15:20,157 INFO [Listener at localhost/42081] server.AbstractConnector(333): Started ServerConnector@17e6870e{HTTP/1.1, (http/1.1)}{0.0.0.0:38967} 2023-07-18 02:15:20,157 INFO [Listener at localhost/42081] server.Server(415): Started @38369ms 2023-07-18 02:15:20,157 INFO [Listener at localhost/42081] master.HMaster(444): hbase.rootdir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0, hbase.cluster.distributed=false 2023-07-18 02:15:20,178 INFO [Listener at localhost/42081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:20,179 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,179 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,179 INFO 
[Listener at localhost/42081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:20,179 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,179 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:20,179 INFO [Listener at localhost/42081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:20,180 INFO [Listener at localhost/42081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37933 2023-07-18 02:15:20,181 INFO [Listener at localhost/42081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:20,182 DEBUG [Listener at localhost/42081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:20,183 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,185 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,186 INFO [Listener at localhost/42081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37933 connecting to ZooKeeper ensemble=127.0.0.1:53987 2023-07-18 02:15:20,190 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:379330x0, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:20,191 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37933-0x101763659290001 connected 2023-07-18 02:15:20,191 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:20,192 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:20,192 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:20,193 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37933 2023-07-18 02:15:20,193 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37933 2023-07-18 02:15:20,193 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37933 2023-07-18 02:15:20,198 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37933 2023-07-18 02:15:20,198 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37933 2023-07-18 02:15:20,201 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:20,201 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:20,201 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:20,202 INFO [Listener at localhost/42081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:20,202 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:20,202 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:20,202 INFO [Listener at localhost/42081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:15:20,203 INFO [Listener at localhost/42081] http.HttpServer(1146): Jetty bound to port 37449 2023-07-18 02:15:20,203 INFO [Listener at localhost/42081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:20,205 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,206 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@fb6e2b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:20,206 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,206 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6c4b4518{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:20,320 INFO [Listener at localhost/42081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:20,321 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:20,321 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:20,321 INFO [Listener at localhost/42081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:20,322 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,323 INFO 
[Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6e657e93{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/jetty-0_0_0_0-37449-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7948408542210484408/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:20,324 INFO [Listener at localhost/42081] server.AbstractConnector(333): Started ServerConnector@1c79df8{HTTP/1.1, (http/1.1)}{0.0.0.0:37449} 2023-07-18 02:15:20,324 INFO [Listener at localhost/42081] server.Server(415): Started @38536ms 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:20,336 INFO [Listener at localhost/42081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:20,337 INFO [Listener at localhost/42081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46199 2023-07-18 02:15:20,337 INFO [Listener at localhost/42081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:20,338 DEBUG [Listener at localhost/42081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:20,339 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,340 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,341 INFO [Listener at localhost/42081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46199 connecting to ZooKeeper ensemble=127.0.0.1:53987 2023-07-18 02:15:20,344 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:461990x0, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
02:15:20,346 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:461990x0, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:20,346 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46199-0x101763659290002 connected 2023-07-18 02:15:20,347 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:20,347 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:20,350 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46199 2023-07-18 02:15:20,351 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46199 2023-07-18 02:15:20,352 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46199 2023-07-18 02:15:20,354 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46199 2023-07-18 02:15:20,355 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46199 2023-07-18 02:15:20,357 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:20,357 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:20,358 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:20,358 INFO [Listener at localhost/42081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:20,358 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:20,359 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:20,359 INFO [Listener at localhost/42081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
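[Editor's note] Each server above connects to the ZooKeeper ensemble and then sets watchers on znodes such as /hbase/master and /hbase/running before they exist. In plain ZooKeeper terms that is an exists() call with a watch registered, so the client is notified once the node is created. A minimal sketch with the standard ZooKeeper client follows; the ensemble address and session timeout are taken from the log, and error handling is omitted.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ExistsWatchSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // The watcher receives the SyncConnected session event and later node events.
        Watcher watcher = (WatchedEvent event) -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected
                && event.getType() == Watcher.Event.EventType.None) {
                connected.countDown();
            }
            if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                System.out.println("znode created: " + event.getPath());
            }
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:53987", 90000, watcher);
        connected.await();
        // Setting a watch on a znode that does not yet exist: exists() returns null
        // but registers the watch, so a later create() fires NodeCreated.
        if (zk.exists("/hbase/master", true) == null) {
            System.out.println("Set watcher on znode that does not yet exist, /hbase/master");
        }
        zk.close();
    }
}
```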
2023-07-18 02:15:20,360 INFO [Listener at localhost/42081] http.HttpServer(1146): Jetty bound to port 39983 2023-07-18 02:15:20,360 INFO [Listener at localhost/42081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:20,362 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,362 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@36c8a553{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:20,363 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,363 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@606d5d1f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:20,481 INFO [Listener at localhost/42081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:20,482 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:20,483 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:20,483 INFO [Listener at localhost/42081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:20,484 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,484 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5018e9ad{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/jetty-0_0_0_0-39983-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4847407649045755936/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:20,486 INFO [Listener at localhost/42081] server.AbstractConnector(333): Started ServerConnector@7f94613e{HTTP/1.1, (http/1.1)}{0.0.0.0:39983} 2023-07-18 02:15:20,486 INFO [Listener at localhost/42081] server.Server(415): Started @38697ms 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:20,498 INFO 
[Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:20,498 INFO [Listener at localhost/42081] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:20,499 INFO [Listener at localhost/42081] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36883 2023-07-18 02:15:20,499 INFO [Listener at localhost/42081] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:20,501 DEBUG [Listener at localhost/42081] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:20,501 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,502 INFO [Listener at localhost/42081] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,503 INFO [Listener at localhost/42081] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36883 connecting to ZooKeeper ensemble=127.0.0.1:53987 2023-07-18 02:15:20,508 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:368830x0, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:20,509 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:368830x0, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:20,510 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36883-0x101763659290003 connected 2023-07-18 02:15:20,510 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:20,511 DEBUG [Listener at localhost/42081] zookeeper.ZKUtil(164): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:20,511 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36883 2023-07-18 02:15:20,512 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36883 2023-07-18 02:15:20,515 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36883 2023-07-18 02:15:20,516 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36883 2023-07-18 02:15:20,516 DEBUG [Listener at localhost/42081] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=36883 2023-07-18 02:15:20,518 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:20,518 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:20,518 INFO [Listener at localhost/42081] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:20,519 INFO [Listener at localhost/42081] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:20,519 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:20,519 INFO [Listener at localhost/42081] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:20,519 INFO [Listener at localhost/42081] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:15:20,520 INFO [Listener at localhost/42081] http.HttpServer(1146): Jetty bound to port 38823 2023-07-18 02:15:20,520 INFO [Listener at localhost/42081] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:20,524 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,524 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@125559c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:20,525 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,525 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d052187{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:20,658 INFO [Listener at localhost/42081] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:20,659 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:20,659 INFO [Listener at localhost/42081] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:20,660 INFO [Listener at localhost/42081] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:20,660 INFO [Listener at localhost/42081] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:20,661 INFO [Listener at localhost/42081] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@277c98ff{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/java.io.tmpdir/jetty-0_0_0_0-38823-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4142694575424997222/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:20,663 INFO [Listener at localhost/42081] server.AbstractConnector(333): Started ServerConnector@2959891b{HTTP/1.1, (http/1.1)}{0.0.0.0:38823} 2023-07-18 02:15:20,663 INFO [Listener at localhost/42081] server.Server(415): Started @38874ms 2023-07-18 02:15:20,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:20,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@50d1e9c0{HTTP/1.1, (http/1.1)}{0.0.0.0:35437} 2023-07-18 02:15:20,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38882ms 2023-07-18 02:15:20,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:20,675 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:15:20,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:20,677 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:20,677 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:20,677 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:20,677 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:20,678 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:20,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:15:20,682 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:15:20,682 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,43727,1689646519894 from backup master directory 2023-07-18 02:15:20,683 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:20,683 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:20,683 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:15:20,683 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:20,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/hbase.id with ID: 1e04bdab-69fe-4921-9cc8-31aef44fbb43 2023-07-18 02:15:20,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:20,726 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:20,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x6fc90313 to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:20,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@33931863, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:20,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:20,756 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 02:15:20,757 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:20,759 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store-tmp 2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 02:15:21,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:21,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
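[Editor's note] The master creates its local 'master:store' region with a single 'proc' column family (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', block cache enabled). As a hedged sketch, an equivalent descriptor can be expressed with the HBase 2.x builder API as below; the values are copied from the log entry above, but this is not the master's own bootstrap code.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
    public static void main(String[] args) {
        // Column family 'proc' with the attributes printed in the log:
        // BLOOMFILTER=>'ROW', VERSIONS=>'1', BLOCKCACHE=>'true', BLOCKSIZE=>'65536'.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlockCacheEnabled(true)
            .setBlocksize(64 * 1024)
            .build();

        TableDescriptor masterStore = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();

        System.out.println(masterStore);
    }
}
```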
2023-07-18 02:15:21,200 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:21,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/WALs/jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:21,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43727%2C1689646519894, suffix=, logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/WALs/jenkins-hbase4.apache.org,43727,1689646519894, archiveDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/oldWALs, maxLogs=10 2023-07-18 02:15:21,220 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK] 2023-07-18 02:15:21,221 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK] 2023-07-18 02:15:21,221 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK] 2023-07-18 02:15:21,223 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/WALs/jenkins-hbase4.apache.org,43727,1689646519894/jenkins-hbase4.apache.org%2C43727%2C1689646519894.1689646521204 2023-07-18 02:15:21,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK], DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK], DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK]] 2023-07-18 02:15:21,223 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:21,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,227 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,228 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 02:15:21,229 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 02:15:21,229 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,230 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:21,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:21,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10367588320, jitterRate=-0.034443095326423645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:21,238 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:21,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 02:15:21,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 02:15:21,240 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 02:15:21,240 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 02:15:21,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 02:15:21,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 02:15:21,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 02:15:21,242 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 02:15:21,243 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 02:15:21,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 02:15:21,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 02:15:21,244 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 02:15:21,246 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:21,247 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 02:15:21,247 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 02:15:21,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 02:15:21,249 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:21,249 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:21,249 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 02:15:21,249 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:21,249 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:21,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,43727,1689646519894, sessionid=0x101763659290000, setting cluster-up flag (Was=false) 2023-07-18 02:15:21,255 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:21,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 02:15:21,260 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:21,264 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:21,269 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 02:15:21,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:21,270 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.hbase-snapshot/.tmp 2023-07-18 02:15:21,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 02:15:21,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 02:15:21,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 02:15:21,273 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:15:21,273 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
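[Editor's note] The last entries above show the RSGroupAdminEndpoint coprocessor being loaded on the master, which is the feature this rsgroup test exercises. A hedged configuration sketch of how the rsgroup feature is typically wired in on branch-2.4 follows; the property keys are standard HBase configuration names, but treat the snippet as illustrative rather than the exact setup this test uses.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Load the RSGroup admin endpoint as a master coprocessor...
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // ...and use the group-aware balancer so regions stay within their group.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.master.loadbalancer.class"));
    }
}
```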
2023-07-18 02:15:21,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 02:15:21,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:21,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:15:21,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 02:15:21,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:15:21,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
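[Editor's note] The balancer lines report maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000 for the StochasticLoadBalancer; those values come from configuration. The sketch below shows the corresponding keys under the usual hbase.master.balancer.stochastic.* naming; the key names are an assumption based on that convention, so verify them against your HBase version before relying on them.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property names for the values printed in the log above.
        conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
        System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
    }
}
```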
2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:21,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,290 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689646551290 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 02:15:21,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
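[Editor's note] The master then starts several internal executor services, each with matching corePoolSize and maxPoolSize (for example MASTER_OPEN_REGION at 5/5). In plain JDK terms that is simply a fixed-size thread pool; a minimal, illustrative sketch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolSketch {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize == maxPoolSize == 5, like MASTER_OPEN_REGION in the log.
        ExecutorService openRegionPool = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 10; i++) {
            final int region = i;
            openRegionPool.submit(() -> System.out.println("open region " + region));
        }
        openRegionPool.shutdown();
        openRegionPool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```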
2023-07-18 02:15:21,291 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:21,291 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 02:15:21,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 02:15:21,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 02:15:21,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 02:15:21,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646521293,5,FailOnTimeoutGroup] 2023-07-18 02:15:21,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646521293,5,FailOnTimeoutGroup] 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,293 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,293 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:21,304 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:21,305 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:21,305 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0 2023-07-18 02:15:21,316 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,318 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:15:21,319 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/info 2023-07-18 02:15:21,319 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:15:21,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,320 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:15:21,321 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:21,321 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:15:21,322 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,322 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:15:21,323 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/table 2023-07-18 02:15:21,323 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:15:21,324 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,324 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740 2023-07-18 02:15:21,325 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740 2023-07-18 02:15:21,327 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 02:15:21,328 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:15:21,332 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:21,333 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11955008640, jitterRate=0.11339694261550903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:21,333 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:21,333 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:21,333 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:15:21,334 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:21,334 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 02:15:21,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 02:15:21,335 
INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 02:15:21,336 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 02:15:21,366 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(951): ClusterId : 1e04bdab-69fe-4921-9cc8-31aef44fbb43 2023-07-18 02:15:21,366 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(951): ClusterId : 1e04bdab-69fe-4921-9cc8-31aef44fbb43 2023-07-18 02:15:21,367 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:21,368 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:21,366 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(951): ClusterId : 1e04bdab-69fe-4921-9cc8-31aef44fbb43 2023-07-18 02:15:21,370 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:21,372 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:21,372 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:21,372 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:21,372 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:21,372 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:21,372 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:21,375 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:21,377 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:21,377 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:21,378 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ReadOnlyZKClient(139): Connect 0x187f46a4 to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:21,378 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ReadOnlyZKClient(139): Connect 0x233a5539 to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:21,378 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ReadOnlyZKClient(139): Connect 0x0af16196 to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:21,388 DEBUG [RS:1;jenkins-hbase4:46199] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@16b8abdb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, 
readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:21,388 DEBUG [RS:2;jenkins-hbase4:36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2dfdaf2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:21,389 DEBUG [RS:1;jenkins-hbase4:46199] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5785a1e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:21,389 DEBUG [RS:0;jenkins-hbase4:37933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@90ae6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:21,389 DEBUG [RS:2;jenkins-hbase4:36883] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cc0a515, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:21,389 DEBUG [RS:0;jenkins-hbase4:37933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@450b8483, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:21,398 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46199 2023-07-18 02:15:21,398 INFO [RS:1;jenkins-hbase4:46199] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:21,398 INFO [RS:1;jenkins-hbase4:46199] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:21,398 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:21,399 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43727,1689646519894 with isa=jenkins-hbase4.apache.org/172.31.14.131:46199, startcode=1689646520335 2023-07-18 02:15:21,399 DEBUG [RS:1;jenkins-hbase4:46199] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:21,401 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57781, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:21,403 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43727] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,403 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
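Editor's note: the 02:15:21,293 through 02:15:21,333 entries above show the hbase:meta table descriptor being written and the 1588230740 region bootstrapped, with column families such as 'info' carrying BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3' and BLOCKSIZE => '8192'. For comparison only, the sketch below builds a user-table descriptor with similar family attributes; the builder method names are assumed from the HBase 2.x client API and the table name is hypothetical.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
    public static void main(String[] args) {
        // Mirrors the attributes logged for the 'info' family: no bloom filter,
        // in-memory, 3 versions, 8 KB blocks, block cache enabled.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .setBlockCacheEnabled(true)
            .build();
        TableDescriptor demo = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_table")) // hypothetical table name
            .setColumnFamily(info)
            .build();
        System.out.println(demo);
    }
}
```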
2023-07-18 02:15:21,403 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:36883 2023-07-18 02:15:21,404 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 02:15:21,404 INFO [RS:2;jenkins-hbase4:36883] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:21,404 INFO [RS:2;jenkins-hbase4:36883] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:21,403 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37933 2023-07-18 02:15:21,404 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:21,404 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0 2023-07-18 02:15:21,404 INFO [RS:0;jenkins-hbase4:37933] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:21,404 INFO [RS:0;jenkins-hbase4:37933] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:21,404 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45369 2023-07-18 02:15:21,404 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:21,404 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38967 2023-07-18 02:15:21,405 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43727,1689646519894 with isa=jenkins-hbase4.apache.org/172.31.14.131:36883, startcode=1689646520497 2023-07-18 02:15:21,405 DEBUG [RS:2;jenkins-hbase4:36883] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:21,406 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:21,406 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,43727,1689646519894 with isa=jenkins-hbase4.apache.org/172.31.14.131:37933, startcode=1689646520178 2023-07-18 02:15:21,407 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53311, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:21,407 DEBUG [RS:0;jenkins-hbase4:37933] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:21,407 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ZKUtil(162): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,407 WARN [RS:1;jenkins-hbase4:46199] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be 
cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:21,407 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43727] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,407 INFO [RS:1;jenkins-hbase4:46199] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:21,407 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:15:21,408 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,408 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 02:15:21,408 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0 2023-07-18 02:15:21,408 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45369 2023-07-18 02:15:21,408 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38967 2023-07-18 02:15:21,409 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35797, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:21,409 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46199,1689646520335] 2023-07-18 02:15:21,410 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43727] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,410 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 02:15:21,410 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 02:15:21,410 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0 2023-07-18 02:15:21,410 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:45369 2023-07-18 02:15:21,410 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38967 2023-07-18 02:15:21,416 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:21,417 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ZKUtil(162): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,417 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36883,1689646520497] 2023-07-18 02:15:21,417 WARN [RS:2;jenkins-hbase4:36883] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:21,417 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37933,1689646520178] 2023-07-18 02:15:21,417 INFO [RS:2;jenkins-hbase4:36883] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:21,417 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ZKUtil(162): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,417 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,417 WARN [RS:0;jenkins-hbase4:37933] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
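Editor's note: the registration handshake above interleaves output from the master and all three region servers (ports 37933, 46199 and 36883). When a run like this has to be audited, pulling out just the "Registering regionserver=" events is often enough; the sketch below uses only the JDK, and the input file path is hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegistrationGrep {
    // Matches e.g. "Registering regionserver=jenkins-hbase4.apache.org,46199,1689646520335"
    private static final Pattern REGISTER =
        Pattern.compile("Registering regionserver=(\\S+?),(\\d+),(\\d+)");

    public static void main(String[] args) throws IOException {
        String path = args.length > 0 ? args[0] : "test.log"; // hypothetical log file
        for (String line : Files.readAllLines(Paths.get(path))) {
            Matcher m = REGISTER.matcher(line);
            while (m.find()) {
                System.out.printf("host=%s port=%s startcode=%s%n",
                    m.group(1), m.group(2), m.group(3));
            }
        }
    }
}
```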
2023-07-18 02:15:21,418 INFO [RS:0;jenkins-hbase4:37933] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:21,418 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1948): logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,418 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ZKUtil(162): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,418 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ZKUtil(162): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,419 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ZKUtil(162): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,422 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:21,422 INFO [RS:1;jenkins-hbase4:46199] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:21,422 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ZKUtil(162): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,422 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ZKUtil(162): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,423 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ZKUtil(162): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,423 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ZKUtil(162): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,423 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ZKUtil(162): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,423 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ZKUtil(162): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,424 INFO [RS:1;jenkins-hbase4:46199] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:21,424 INFO [RS:1;jenkins-hbase4:46199] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:21,424 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,424 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:21,424 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:21,424 INFO [RS:2;jenkins-hbase4:36883] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:21,424 INFO [RS:0;jenkins-hbase4:37933] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:21,426 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:21,427 INFO [RS:0;jenkins-hbase4:37933] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:21,427 INFO [RS:2;jenkins-hbase4:36883] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:21,427 INFO [RS:0;jenkins-hbase4:37933] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:21,428 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,428 INFO [RS:2;jenkins-hbase4:36883] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:21,429 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,429 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
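Editor's note: each region server above reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M, i.e. 95% of the limit. The arithmetic is shown below as a minimal sketch; the 0.95 fraction is an assumption matching the usual default of hbase.regionserver.global.memstore.size.lower.limit in HBase 2.x.

```java
public class MemStoreLimitSketch {
    public static void main(String[] args) {
        double globalLimitMb = 782.4;     // from the MemStoreFlusher lines above
        double lowerLimitFraction = 0.95; // assumed default lower-limit fraction
        double lowMarkMb = globalLimitMb * lowerLimitFraction;
        System.out.printf("low mark = %.1f M%n", lowMarkMb); // prints 743.3 M, matching the log
    }
}
```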
2023-07-18 02:15:21,429 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:21,429 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,430 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,431 DEBUG [RS:1;jenkins-hbase4:46199] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,431 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,431 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,434 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,431 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,435 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,435 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,435 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,435 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:21,436 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:2;jenkins-hbase4:36883] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,436 DEBUG [RS:0;jenkins-hbase4:37933] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:21,440 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,440 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,440 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,441 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,441 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,441 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,441 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,441 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,447 INFO [RS:1;jenkins-hbase4:46199] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:21,448 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46199,1689646520335-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,453 INFO [RS:0;jenkins-hbase4:37933] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:21,453 INFO [RS:2;jenkins-hbase4:36883] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:21,453 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37933,1689646520178-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,453 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36883,1689646520497-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,462 INFO [RS:1;jenkins-hbase4:46199] regionserver.Replication(203): jenkins-hbase4.apache.org,46199,1689646520335 started 2023-07-18 02:15:21,462 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46199,1689646520335, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46199, sessionid=0x101763659290002 2023-07-18 02:15:21,462 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:21,462 DEBUG [RS:1;jenkins-hbase4:46199] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,462 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46199,1689646520335' 2023-07-18 02:15:21,462 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46199,1689646520335' 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:21,463 DEBUG [RS:1;jenkins-hbase4:46199] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:21,464 DEBUG [RS:1;jenkins-hbase4:46199] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:21,464 INFO [RS:1;jenkins-hbase4:46199] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 02:15:21,466 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,467 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ZKUtil(398): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 02:15:21,467 INFO [RS:1;jenkins-hbase4:46199] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 02:15:21,467 INFO [RS:2;jenkins-hbase4:36883] regionserver.Replication(203): jenkins-hbase4.apache.org,36883,1689646520497 started 2023-07-18 02:15:21,467 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36883,1689646520497, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36883, sessionid=0x101763659290003 2023-07-18 02:15:21,467 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,467 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:21,467 DEBUG [RS:2;jenkins-hbase4:36883] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,467 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36883,1689646520497' 2023-07-18 02:15:21,467 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:21,467 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36883,1689646520497' 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:21,468 DEBUG [RS:2;jenkins-hbase4:36883] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:21,469 DEBUG [RS:2;jenkins-hbase4:36883] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:21,469 INFO [RS:2;jenkins-hbase4:36883] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 02:15:21,469 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:21,469 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ZKUtil(398): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 02:15:21,469 INFO [RS:2;jenkins-hbase4:36883] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 02:15:21,469 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,469 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,469 INFO [RS:0;jenkins-hbase4:37933] regionserver.Replication(203): jenkins-hbase4.apache.org,37933,1689646520178 started 2023-07-18 02:15:21,469 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37933,1689646520178, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37933, sessionid=0x101763659290001 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37933,1689646520178' 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,470 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37933,1689646520178' 2023-07-18 02:15:21,471 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:21,471 DEBUG [RS:0;jenkins-hbase4:37933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:21,471 DEBUG [RS:0;jenkins-hbase4:37933] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:21,471 INFO [RS:0;jenkins-hbase4:37933] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 02:15:21,471 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
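Editor's note: the WAL configuration entries that follow report AsyncFSWALProvider with blocksize=256 MB, rollsize=128 MB and maxLogs=32 for each region server. A minimal sketch of expressing that configuration is below; the property key names are assumptions based on HBase 2.x documentation (rollsize is derived as blocksize times the logroll multiplier, here 256 MB * 0.5 = 128 MB) and should be verified before use.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Values mirror the WAL configuration reported below; key names are assumed.
        conf.set("hbase.wal.provider", "asyncfs");                    // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f); // 256 MB * 0.5 = 128 MB rollsize
        conf.setInt("hbase.regionserver.maxlogs", 32);
        long rollsize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
            * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
        System.out.println("rollsize = " + rollsize);
    }
}
```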
2023-07-18 02:15:21,471 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ZKUtil(398): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 02:15:21,472 INFO [RS:0;jenkins-hbase4:37933] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 02:15:21,472 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,472 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:21,487 DEBUG [jenkins-hbase4:43727] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:21,488 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37933,1689646520178, state=OPENING 2023-07-18 02:15:21,490 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 02:15:21,492 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:21,492 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37933,1689646520178}] 2023-07-18 02:15:21,492 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:21,571 INFO [RS:1;jenkins-hbase4:46199] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46199%2C1689646520335, suffix=, logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,46199,1689646520335, archiveDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs, maxLogs=32 2023-07-18 02:15:21,571 INFO [RS:2;jenkins-hbase4:36883] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36883%2C1689646520497, suffix=, logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,36883,1689646520497, archiveDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs, maxLogs=32 2023-07-18 02:15:21,574 INFO [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(489): WAL configuration: 
blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37933%2C1689646520178, suffix=, logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,37933,1689646520178, archiveDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs, maxLogs=32 2023-07-18 02:15:21,582 WARN [ReadOnlyZKClient-127.0.0.1:53987@0x6fc90313] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 02:15:21,582 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:21,584 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58298, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:21,584 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37933] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58298 deadline: 1689646581584, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,590 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK] 2023-07-18 02:15:21,590 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK] 2023-07-18 02:15:21,603 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK] 2023-07-18 02:15:21,608 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK] 2023-07-18 02:15:21,609 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK] 2023-07-18 02:15:21,609 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK] 2023-07-18 02:15:21,609 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK] 2023-07-18 02:15:21,610 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK] 2023-07-18 02:15:21,610 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK] 2023-07-18 02:15:21,611 INFO [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,37933,1689646520178/jenkins-hbase4.apache.org%2C37933%2C1689646520178.1689646521575 2023-07-18 02:15:21,616 INFO [RS:2;jenkins-hbase4:36883] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,36883,1689646520497/jenkins-hbase4.apache.org%2C36883%2C1689646520497.1689646521572 2023-07-18 02:15:21,617 DEBUG [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK], DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK], DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK]] 2023-07-18 02:15:21,617 INFO [RS:1;jenkins-hbase4:46199] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,46199,1689646520335/jenkins-hbase4.apache.org%2C46199%2C1689646520335.1689646521572 2023-07-18 02:15:21,617 DEBUG [RS:2;jenkins-hbase4:36883] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK], DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK], DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK]] 2023-07-18 02:15:21,617 DEBUG [RS:1;jenkins-hbase4:46199] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK], DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK], DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK]] 2023-07-18 02:15:21,647 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,648 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:15:21,650 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:21,654 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 02:15:21,654 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:21,656 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37933%2C1689646520178.meta, suffix=.meta, 
logDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,37933,1689646520178, archiveDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs, maxLogs=32 2023-07-18 02:15:21,671 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK] 2023-07-18 02:15:21,672 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK] 2023-07-18 02:15:21,672 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK] 2023-07-18 02:15:21,674 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/WALs/jenkins-hbase4.apache.org,37933,1689646520178/jenkins-hbase4.apache.org%2C37933%2C1689646520178.meta.1689646521656.meta 2023-07-18 02:15:21,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34755,DS-731b220f-5a89-40dd-8367-668303e01b62,DISK], DatanodeInfoWithStorage[127.0.0.1:44607,DS-13383ffd-37a5-4adb-9bed-3db45d050a8d,DISK], DatanodeInfoWithStorage[127.0.0.1:44711,DS-2e88d1cf-ff15-4411-83b7-f35d82e686b5,DISK]] 2023-07-18 02:15:21,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:21,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:15:21,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 02:15:21,674 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
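Each WAL above is created with blocksize=256 MB, rollsize=128 MB and maxLogs=32. A sketch of the configuration keys that appear to drive those numbers; the rollsize looks like blocksize multiplied by the roll multiplier, and WalConfSketch is an invented name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // blocksize=256 MB: explicit WAL block size (otherwise derived from the DFS block size).
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    // rollsize=128 MB: blocksize * multiplier, i.e. 256 MB * 0.5.
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
    // maxLogs=32: cap on un-archived WAL files per region server before forced flushes.
    conf.setInt("hbase.regionserver.maxlogs", 32);
  }
}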
2023-07-18 02:15:21,674 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 02:15:21,675 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,675 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 02:15:21,675 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 02:15:21,676 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:15:21,677 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/info 2023-07-18 02:15:21,677 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/info 2023-07-18 02:15:21,677 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:15:21,678 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,678 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:15:21,679 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:21,679 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:21,679 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:15:21,679 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,680 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:15:21,680 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/table 2023-07-18 02:15:21,680 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/table 2023-07-18 02:15:21,681 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:15:21,681 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:21,682 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740 2023-07-18 02:15:21,683 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740 2023-07-18 02:15:21,685 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
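The CompactionConfiguration lines above dump the effective compaction settings for each column family of hbase:meta. A sketch of the usual configuration keys behind those values; the mapping is a reading of the log output, not something the test sets explicitly, and CompactionConfSketch is an invented name:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days)
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
  }
}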
2023-07-18 02:15:21,686 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:15:21,687 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11244055680, jitterRate=0.0471842885017395}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:15:21,687 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:15:21,688 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689646521647 2023-07-18 02:15:21,692 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 02:15:21,693 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 02:15:21,693 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37933,1689646520178, state=OPEN 2023-07-18 02:15:21,694 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:15:21,694 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:21,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 02:15:21,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37933,1689646520178 in 202 msec 2023-07-18 02:15:21,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 02:15:21,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 362 msec 2023-07-18 02:15:21,699 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 424 msec 2023-07-18 02:15:21,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689646521699, completionTime=-1 2023-07-18 02:15:21,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 02:15:21,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
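At this point hbase:meta (region 1588230740) is open on jenkins-hbase4.apache.org,37933 and its location has been published to ZooKeeper under /hbase/meta-region-server. A sketch of resolving that location from a plain client, assuming the quorum and port of this particular run; MetaLocationSketch and the hard-coded endpoint are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");   // quorum used by this test run
    conf.set(HConstants.ZOOKEEPER_CLIENT_PORT, "53987");  // ephemeral test port, hypothetical elsewhere
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Ask for the region holding the empty start key, i.e. hbase:meta,,1.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}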
2023-07-18 02:15:21,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 02:15:21,702 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689646581702 2023-07-18 02:15:21,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689646641703 2023-07-18 02:15:21,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43727,1689646519894-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43727,1689646519894-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43727,1689646519894-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:43727, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 02:15:21,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:21,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 02:15:21,710 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 02:15:21,711 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:21,712 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:21,713 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/namespace/1731d73df076ff213586f812958d935c 2023-07-18 02:15:21,713 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/namespace/1731d73df076ff213586f812958d935c empty. 2023-07-18 02:15:21,714 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/namespace/1731d73df076ff213586f812958d935c 2023-07-18 02:15:21,714 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 02:15:21,727 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:21,728 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1731d73df076ff213586f812958d935c, NAME => 'hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp 2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 1731d73df076ff213586f812958d935c, disabling compactions & flushes 2023-07-18 02:15:21,741 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 
2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. after waiting 0 ms 2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:21,741 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:21,741 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 1731d73df076ff213586f812958d935c: 2023-07-18 02:15:21,744 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:21,745 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646521745"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646521745"}]},"ts":"1689646521745"} 2023-07-18 02:15:21,748 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:21,748 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:21,749 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646521748"}]},"ts":"1689646521748"} 2023-07-18 02:15:21,750 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 02:15:21,754 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:21,754 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:21,754 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:21,754 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:21,754 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:21,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1731d73df076ff213586f812958d935c, ASSIGN}] 2023-07-18 02:15:21,756 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=1731d73df076ff213586f812958d935c, ASSIGN 2023-07-18 02:15:21,757 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=1731d73df076ff213586f812958d935c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37933,1689646520178; forceNewPlan=false, retain=false 2023-07-18 02:15:21,888 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:21,890 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 02:15:21,892 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:21,893 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:21,895 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:21,895 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042 empty. 2023-07-18 02:15:21,896 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:21,896 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 02:15:21,907 INFO [jenkins-hbase4:43727] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
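The two create-table entries above include the full descriptors the master uses for hbase:namespace and hbase:rsgroup (column family attributes such as VERSIONS, IN_MEMORY and BLOCKSIZE, plus the MultiRowMutationEndpoint coprocessor and DisabledRegionSplitPolicy for rsgroup). System tables are created by the master itself; a sketch of building a similar descriptor for a hypothetical user table, where CreateTableSketch and example_table are invented names:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Mirrors the shape of the 'info' family in the hbase:namespace descriptor above.
  static void createInfoTable(Connection conn) throws Exception {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setMaxVersions(10)   // VERSIONS => '10'
        .setInMemory(true)    // IN_MEMORY => 'true'
        .setBlocksize(8192)   // BLOCKSIZE => '8192'
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_table"))
        .setColumnFamily(info)
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(td);  // drives a CreateTableProcedure like pid=4/pid=6 above
    }
  }
}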
2023-07-18 02:15:21,908 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1731d73df076ff213586f812958d935c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:21,909 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646521908"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646521908"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646521908"}]},"ts":"1689646521908"} 2023-07-18 02:15:21,910 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure 1731d73df076ff213586f812958d935c, server=jenkins-hbase4.apache.org,37933,1689646520178}] 2023-07-18 02:15:21,914 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:21,916 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 56ce0d277bd1ede2fe41f5e8c85d5042, NAME => 'hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp 2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 56ce0d277bd1ede2fe41f5e8c85d5042, disabling compactions & flushes 2023-07-18 02:15:21,925 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. after waiting 0 ms 2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:21,925 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 
2023-07-18 02:15:21,925 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 56ce0d277bd1ede2fe41f5e8c85d5042: 2023-07-18 02:15:21,927 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:21,928 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646521928"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646521928"}]},"ts":"1689646521928"} 2023-07-18 02:15:21,929 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:21,930 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:21,930 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646521930"}]},"ts":"1689646521930"} 2023-07-18 02:15:21,931 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 02:15:21,934 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:21,934 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:21,934 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:21,934 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:21,934 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:21,935 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=56ce0d277bd1ede2fe41f5e8c85d5042, ASSIGN}] 2023-07-18 02:15:21,935 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=56ce0d277bd1ede2fe41f5e8c85d5042, ASSIGN 2023-07-18 02:15:21,936 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=56ce0d277bd1ede2fe41f5e8c85d5042, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46199,1689646520335; forceNewPlan=false, retain=false 2023-07-18 02:15:22,068 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 
2023-07-18 02:15:22,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1731d73df076ff213586f812958d935c, NAME => 'hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:22,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,068 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,069 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,070 INFO [StoreOpener-1731d73df076ff213586f812958d935c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,071 DEBUG [StoreOpener-1731d73df076ff213586f812958d935c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/info 2023-07-18 02:15:22,072 DEBUG [StoreOpener-1731d73df076ff213586f812958d935c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/info 2023-07-18 02:15:22,072 INFO [StoreOpener-1731d73df076ff213586f812958d935c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1731d73df076ff213586f812958d935c columnFamilyName info 2023-07-18 02:15:22,073 INFO [StoreOpener-1731d73df076ff213586f812958d935c-1] regionserver.HStore(310): Store=1731d73df076ff213586f812958d935c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:22,073 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,074 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,076 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1731d73df076ff213586f812958d935c 2023-07-18 02:15:22,079 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:22,079 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1731d73df076ff213586f812958d935c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11348995200, jitterRate=0.05695754289627075}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:22,079 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1731d73df076ff213586f812958d935c: 2023-07-18 02:15:22,080 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c., pid=7, masterSystemTime=1689646522064 2023-07-18 02:15:22,082 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:22,082 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:22,083 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=1731d73df076ff213586f812958d935c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:22,083 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646522083"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646522083"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646522083"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646522083"}]},"ts":"1689646522083"} 2023-07-18 02:15:22,086 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 02:15:22,086 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure 1731d73df076ff213586f812958d935c, server=jenkins-hbase4.apache.org,37933,1689646520178 in 174 msec 2023-07-18 02:15:22,086 INFO [jenkins-hbase4:43727] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
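Each OPENING/OPEN transition above is persisted by RegionStateStore as a Put against hbase:meta, filling qualifiers such as info:regioninfo, info:sn, info:server, info:serverstartcode and info:state. A sketch of reading those assignment columns back with a client-side scan; MetaScanSketch is an invented name and a live Connection is assumed:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  static void dumpAssignments(Connection conn) throws Exception {
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(Bytes.toBytes("info")))) {
      for (Result r : scanner) {
        byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
        byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
        System.out.println(Bytes.toString(r.getRow()) + " -> "
            + (server == null ? "-" : Bytes.toString(server))
            + " [" + (state == null ? "-" : Bytes.toString(state)) + "]");
      }
    }
  }
}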
2023-07-18 02:15:22,087 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=56ce0d277bd1ede2fe41f5e8c85d5042, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:22,088 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646522087"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646522087"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646522087"}]},"ts":"1689646522087"} 2023-07-18 02:15:22,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 02:15:22,089 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=1731d73df076ff213586f812958d935c, ASSIGN in 332 msec 2023-07-18 02:15:22,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure 56ce0d277bd1ede2fe41f5e8c85d5042, server=jenkins-hbase4.apache.org,46199,1689646520335}] 2023-07-18 02:15:22,140 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:22,143 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522142"}]},"ts":"1689646522142"} 2023-07-18 02:15:22,147 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 02:15:22,150 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:22,151 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 441 msec 2023-07-18 02:15:22,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 02:15:22,240 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:22,241 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:22,247 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 02:15:22,253 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:22,256 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished 
pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-18 02:15:22,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 02:15:22,262 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-18 02:15:22,262 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 02:15:22,298 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:22,298 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:15:22,299 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52852, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:22,304 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 56ce0d277bd1ede2fe41f5e8c85d5042, NAME => 'hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. service=MultiRowMutationService 2023-07-18 02:15:22,305 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
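With hbase:rsgroup opening and the RSGroupStartupWorker able to reach it, group membership ends up stored in that table's 'm' family. A sketch against the rsgroup admin client shipped in the hbase-rsgroup module; the exact method surface used here (addRSGroup, listRSGroups) is an assumption about that client's API rather than something shown in the log, and example_group is a hypothetical name:

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  static void listGroups(Connection conn) throws Exception {
    // The client is typically backed by the rsgroup coprocessor endpoint on the master.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("example_group");  // hypothetical group name
    for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
      System.out.println(info.getName() + " -> " + info.getServers());
    }
  }
}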
2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,306 INFO [StoreOpener-56ce0d277bd1ede2fe41f5e8c85d5042-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,308 DEBUG [StoreOpener-56ce0d277bd1ede2fe41f5e8c85d5042-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/m 2023-07-18 02:15:22,308 DEBUG [StoreOpener-56ce0d277bd1ede2fe41f5e8c85d5042-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/m 2023-07-18 02:15:22,308 INFO [StoreOpener-56ce0d277bd1ede2fe41f5e8c85d5042-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 56ce0d277bd1ede2fe41f5e8c85d5042 columnFamilyName m 2023-07-18 02:15:22,309 INFO [StoreOpener-56ce0d277bd1ede2fe41f5e8c85d5042-1] regionserver.HStore(310): Store=56ce0d277bd1ede2fe41f5e8c85d5042/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:22,309 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:22,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:22,315 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 56ce0d277bd1ede2fe41f5e8c85d5042; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7446d500, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:22,316 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 56ce0d277bd1ede2fe41f5e8c85d5042: 2023-07-18 02:15:22,316 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042., pid=9, masterSystemTime=1689646522297 2023-07-18 02:15:22,319 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:22,320 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:22,320 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=56ce0d277bd1ede2fe41f5e8c85d5042, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:22,320 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646522320"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646522320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646522320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646522320"}]},"ts":"1689646522320"} 2023-07-18 02:15:22,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 02:15:22,323 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure 56ce0d277bd1ede2fe41f5e8c85d5042, server=jenkins-hbase4.apache.org,46199,1689646520335 in 182 msec 2023-07-18 02:15:22,324 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 02:15:22,324 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=56ce0d277bd1ede2fe41f5e8c85d5042, ASSIGN in 387 msec 2023-07-18 02:15:22,336 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:22,345 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 86 msec 2023-07-18 02:15:22,346 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:22,346 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522346"}]},"ts":"1689646522346"} 2023-07-18 02:15:22,347 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 02:15:22,354 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:22,355 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 466 msec 2023-07-18 02:15:22,359 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 02:15:22,361 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 02:15:22,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.678sec 2023-07-18 02:15:22,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 02:15:22,361 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:22,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 02:15:22,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 02:15:22,364 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:22,365 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:22,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
2023-07-18 02:15:22,366 DEBUG [Listener at localhost/42081] zookeeper.ReadOnlyZKClient(139): Connect 0x06a1b9d4 to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:22,366 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,367 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5 empty. 2023-07-18 02:15:22,368 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,369 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 02:15:22,373 DEBUG [Listener at localhost/42081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6b7167b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:22,375 DEBUG [hconnection-0x1c97e0aa-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:22,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 02:15:22,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-18 02:15:22,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:22,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:22,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 02:15:22,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 02:15:22,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43727,1689646519894-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 02:15:22,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43727,1689646519894-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 02:15:22,386 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58310, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:22,387 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:22,387 INFO [Listener at localhost/42081] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:22,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 02:15:22,399 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:22,403 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => f07eda6b0430276b1c086c402e90a1e5, NAME => 'hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp 2023-07-18 02:15:22,417 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,417 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing f07eda6b0430276b1c086c402e90a1e5, disabling compactions & flushes 2023-07-18 02:15:22,418 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,418 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,418 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. after waiting 0 ms 2023-07-18 02:15:22,418 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,418 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 
2023-07-18 02:15:22,418 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for f07eda6b0430276b1c086c402e90a1e5: 2023-07-18 02:15:22,420 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:22,421 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689646522421"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646522421"}]},"ts":"1689646522421"} 2023-07-18 02:15:22,422 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:22,423 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:22,423 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522423"}]},"ts":"1689646522423"} 2023-07-18 02:15:22,424 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 02:15:22,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:22,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:22,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:22,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:22,429 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:22,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=f07eda6b0430276b1c086c402e90a1e5, ASSIGN}] 2023-07-18 02:15:22,430 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=f07eda6b0430276b1c086c402e90a1e5, ASSIGN 2023-07-18 02:15:22,432 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=f07eda6b0430276b1c086c402e90a1e5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36883,1689646520497; forceNewPlan=false, retain=false 2023-07-18 02:15:22,443 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:22,445 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52860, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:22,448 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 02:15:22,448 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 02:15:22,453 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:22,453 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:22,455 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:15:22,457 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,43727,1689646519894] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 02:15:22,490 DEBUG [Listener at localhost/42081] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 02:15:22,492 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35234, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 02:15:22,495 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 02:15:22,495 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:22,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 02:15:22,496 DEBUG [Listener at localhost/42081] zookeeper.ReadOnlyZKClient(139): Connect 0x34a7cc7e to 127.0.0.1:53987 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:22,501 DEBUG [Listener at localhost/42081] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1796f8f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:22,502 INFO [Listener at localhost/42081] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53987 2023-07-18 02:15:22,505 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:22,506 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
VerifyingRSGroupAdminClient-0x10176365929000a connected 2023-07-18 02:15:22,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 02:15:22,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 02:15:22,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 02:15:22,519 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:22,522 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 12 msec 2023-07-18 02:15:22,583 INFO [jenkins-hbase4:43727] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:22,584 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f07eda6b0430276b1c086c402e90a1e5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:22,584 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689646522584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646522584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646522584"}]},"ts":"1689646522584"} 2023-07-18 02:15:22,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure f07eda6b0430276b1c086c402e90a1e5, server=jenkins-hbase4.apache.org,36883,1689646520497}] 2023-07-18 02:15:22,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 02:15:22,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:22,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 02:15:22,624 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:22,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-18 02:15:22,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:22,626 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:22,626 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:15:22,630 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:22,631 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,632 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde empty. 2023-07-18 02:15:22,632 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,632 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 02:15:22,645 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:22,646 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e46be0c47d23103d8592021774029bde, NAME => 'np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing e46be0c47d23103d8592021774029bde, disabling compactions & flushes 2023-07-18 02:15:22,655 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. after waiting 0 ms 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 
2023-07-18 02:15:22,655 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:22,655 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for e46be0c47d23103d8592021774029bde: 2023-07-18 02:15:22,658 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:22,659 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646522659"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646522659"}]},"ts":"1689646522659"} 2023-07-18 02:15:22,660 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:22,661 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:22,661 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522661"}]},"ts":"1689646522661"} 2023-07-18 02:15:22,662 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 02:15:22,666 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:22,666 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:22,666 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:22,666 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:22,666 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:22,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, ASSIGN}] 2023-07-18 02:15:22,667 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, ASSIGN 2023-07-18 02:15:22,668 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36883,1689646520497; forceNewPlan=false, retain=false 2023-07-18 02:15:22,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:22,739 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:22,739 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-18 02:15:22,740 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54614, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:22,746 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,746 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f07eda6b0430276b1c086c402e90a1e5, NAME => 'hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:22,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,747 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,748 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,750 DEBUG [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/q 2023-07-18 02:15:22,750 DEBUG [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/q 2023-07-18 02:15:22,750 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f07eda6b0430276b1c086c402e90a1e5 columnFamilyName q 2023-07-18 02:15:22,751 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] regionserver.HStore(310): Store=f07eda6b0430276b1c086c402e90a1e5/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 02:15:22,751 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,752 DEBUG [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/u 2023-07-18 02:15:22,752 DEBUG [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/u 2023-07-18 02:15:22,752 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f07eda6b0430276b1c086c402e90a1e5 columnFamilyName u 2023-07-18 02:15:22,753 INFO [StoreOpener-f07eda6b0430276b1c086c402e90a1e5-1] regionserver.HStore(310): Store=f07eda6b0430276b1c086c402e90a1e5/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:22,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,754 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,756 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-18 02:15:22,757 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:22,760 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:22,760 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f07eda6b0430276b1c086c402e90a1e5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11893018400, jitterRate=0.10762365162372589}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 02:15:22,760 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f07eda6b0430276b1c086c402e90a1e5: 2023-07-18 02:15:22,761 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5., pid=15, masterSystemTime=1689646522739 2023-07-18 02:15:22,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,764 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:22,765 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=f07eda6b0430276b1c086c402e90a1e5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:22,765 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689646522765"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646522765"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646522765"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646522765"}]},"ts":"1689646522765"} 2023-07-18 02:15:22,768 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-18 02:15:22,768 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure f07eda6b0430276b1c086c402e90a1e5, server=jenkins-hbase4.apache.org,36883,1689646520497 in 180 msec 2023-07-18 02:15:22,769 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 02:15:22,769 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=f07eda6b0430276b1c086c402e90a1e5, ASSIGN in 339 msec 2023-07-18 02:15:22,770 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:22,770 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522770"}]},"ts":"1689646522770"} 2023-07-18 02:15:22,771 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 02:15:22,773 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:22,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 411 msec 2023-07-18 02:15:22,818 INFO [jenkins-hbase4:43727] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:22,819 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e46be0c47d23103d8592021774029bde, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:22,820 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646522819"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646522819"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646522819"}]},"ts":"1689646522819"} 2023-07-18 02:15:22,824 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure e46be0c47d23103d8592021774029bde, server=jenkins-hbase4.apache.org,36883,1689646520497}] 2023-07-18 02:15:22,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:22,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 
2023-07-18 02:15:22,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e46be0c47d23103d8592021774029bde, NAME => 'np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:22,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:22,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,981 INFO [StoreOpener-e46be0c47d23103d8592021774029bde-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,982 DEBUG [StoreOpener-e46be0c47d23103d8592021774029bde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde/fam1 2023-07-18 02:15:22,982 DEBUG [StoreOpener-e46be0c47d23103d8592021774029bde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde/fam1 2023-07-18 02:15:22,983 INFO [StoreOpener-e46be0c47d23103d8592021774029bde-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e46be0c47d23103d8592021774029bde columnFamilyName fam1 2023-07-18 02:15:22,983 INFO [StoreOpener-e46be0c47d23103d8592021774029bde-1] regionserver.HStore(310): Store=e46be0c47d23103d8592021774029bde/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:22,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,987 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e46be0c47d23103d8592021774029bde 2023-07-18 02:15:22,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:22,989 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e46be0c47d23103d8592021774029bde; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11028061600, jitterRate=0.02706827223300934}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:22,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e46be0c47d23103d8592021774029bde: 2023-07-18 02:15:22,990 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde., pid=18, masterSystemTime=1689646522975 2023-07-18 02:15:22,991 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:22,991 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:22,991 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e46be0c47d23103d8592021774029bde, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:22,992 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646522991"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646522991"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646522991"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646522991"}]},"ts":"1689646522991"} 2023-07-18 02:15:22,995 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 02:15:22,995 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure e46be0c47d23103d8592021774029bde, server=jenkins-hbase4.apache.org,36883,1689646520497 in 169 msec 2023-07-18 02:15:22,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 02:15:22,997 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, ASSIGN in 329 msec 2023-07-18 02:15:22,997 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:22,997 DEBUG [PEWorker-3] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646522997"}]},"ts":"1689646522997"} 2023-07-18 02:15:22,998 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 02:15:23,000 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:23,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 380 msec 2023-07-18 02:15:23,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:23,229 INFO [Listener at localhost/42081] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-18 02:15:23,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:23,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 02:15:23,233 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:23,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 02:15:23,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 02:15:23,249 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:23,250 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54630, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:23,252 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-18 02:15:23,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 02:15:23,337 INFO [Listener at localhost/42081] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-18 02:15:23,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:23,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:23,340 INFO [Listener at localhost/42081] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 02:15:23,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 02:15:23,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 02:15:23,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 02:15:23,343 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646523343"}]},"ts":"1689646523343"} 2023-07-18 02:15:23,344 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 02:15:23,346 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 02:15:23,347 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, UNASSIGN}] 2023-07-18 02:15:23,348 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, UNASSIGN 2023-07-18 02:15:23,348 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e46be0c47d23103d8592021774029bde, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:23,348 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646523348"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646523348"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646523348"}]},"ts":"1689646523348"} 2023-07-18 02:15:23,349 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure e46be0c47d23103d8592021774029bde, server=jenkins-hbase4.apache.org,36883,1689646520497}] 2023-07-18 02:15:23,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 02:15:23,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e46be0c47d23103d8592021774029bde 2023-07-18 02:15:23,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e46be0c47d23103d8592021774029bde, disabling compactions & flushes 2023-07-18 02:15:23,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:23,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:23,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. after waiting 0 ms 2023-07-18 02:15:23,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 2023-07-18 02:15:23,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/np1/table1/e46be0c47d23103d8592021774029bde/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:23,506 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde. 
2023-07-18 02:15:23,507 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e46be0c47d23103d8592021774029bde: 2023-07-18 02:15:23,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e46be0c47d23103d8592021774029bde 2023-07-18 02:15:23,508 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e46be0c47d23103d8592021774029bde, regionState=CLOSED 2023-07-18 02:15:23,508 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646523508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646523508"}]},"ts":"1689646523508"} 2023-07-18 02:15:23,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 02:15:23,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure e46be0c47d23103d8592021774029bde, server=jenkins-hbase4.apache.org,36883,1689646520497 in 160 msec 2023-07-18 02:15:23,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 02:15:23,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e46be0c47d23103d8592021774029bde, UNASSIGN in 164 msec 2023-07-18 02:15:23,513 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646523513"}]},"ts":"1689646523513"} 2023-07-18 02:15:23,514 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 02:15:23,515 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 02:15:23,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 177 msec 2023-07-18 02:15:23,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 02:15:23,645 INFO [Listener at localhost/42081] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 02:15:23,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 02:15:23,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,648 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 02:15:23,649 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,650 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:23,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:15:23,653 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:23,655 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde/fam1, FileablePath, hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde/recovered.edits] 2023-07-18 02:15:23,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 02:15:23,659 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde/recovered.edits/4.seqid to hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/archive/data/np1/table1/e46be0c47d23103d8592021774029bde/recovered.edits/4.seqid 2023-07-18 02:15:23,660 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/.tmp/data/np1/table1/e46be0c47d23103d8592021774029bde 2023-07-18 02:15:23,660 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 02:15:23,662 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,663 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 02:15:23,665 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-18 02:15:23,666 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,666 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 02:15:23,666 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646523666"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:23,668 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 02:15:23,668 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e46be0c47d23103d8592021774029bde, NAME => 'np1:table1,,1689646522621.e46be0c47d23103d8592021774029bde.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 02:15:23,668 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
2023-07-18 02:15:23,668 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646523668"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:23,669 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 02:15:23,672 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 02:15:23,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-18 02:15:23,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 02:15:23,756 INFO [Listener at localhost/42081] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 02:15:23,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 02:15:23,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,769 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,772 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,775 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 02:15:23,776 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 02:15:23,776 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:23,777 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,779 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 02:15:23,780 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-18 02:15:23,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43727] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 02:15:23,876 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 02:15:23,876 INFO [Listener at 
localhost/42081] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06a1b9d4 to 127.0.0.1:53987 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] util.JVMClusterUtil(257): Found active master hash=434512604, stopped=false 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:15:23,877 DEBUG [Listener at localhost/42081] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 02:15:23,877 INFO [Listener at localhost/42081] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:23,879 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:23,879 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:23,879 INFO [Listener at localhost/42081] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 02:15:23,879 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:23,879 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:23,879 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:23,881 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:23,881 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:23,881 DEBUG [Listener at localhost/42081] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fc90313 to 127.0.0.1:53987 2023-07-18 02:15:23,881 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:23,881 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:23,881 DEBUG [Listener at localhost/42081] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:23,881 INFO [Listener at localhost/42081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37933,1689646520178' ***** 2023-07-18 02:15:23,881 INFO [Listener at localhost/42081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:23,882 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:23,883 INFO [Listener at localhost/42081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46199,1689646520335' ***** 2023-07-18 02:15:23,888 INFO [Listener at localhost/42081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:23,888 INFO [Listener at localhost/42081] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36883,1689646520497' ***** 2023-07-18 02:15:23,888 INFO [Listener at localhost/42081] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:23,888 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:23,888 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:23,898 INFO [RS:2;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@277c98ff{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:23,898 INFO [RS:1;jenkins-hbase4:46199] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5018e9ad{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:23,898 INFO [RS:0;jenkins-hbase4:37933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6e657e93{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:23,898 INFO [RS:2;jenkins-hbase4:36883] server.AbstractConnector(383): Stopped ServerConnector@2959891b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:23,898 INFO [RS:0;jenkins-hbase4:37933] server.AbstractConnector(383): Stopped ServerConnector@1c79df8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:23,898 INFO [RS:2;jenkins-hbase4:36883] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:23,898 INFO [RS:1;jenkins-hbase4:46199] server.AbstractConnector(383): Stopped ServerConnector@7f94613e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:23,898 INFO [RS:0;jenkins-hbase4:37933] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:23,899 INFO [RS:1;jenkins-hbase4:46199] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:23,899 INFO [RS:2;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d052187{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 
2023-07-18 02:15:23,901 INFO [RS:0;jenkins-hbase4:37933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6c4b4518{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:23,901 INFO [RS:1;jenkins-hbase4:46199] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@606d5d1f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:23,901 INFO [RS:0;jenkins-hbase4:37933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@fb6e2b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:23,902 INFO [RS:1;jenkins-hbase4:46199] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@36c8a553{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:23,901 INFO [RS:2;jenkins-hbase4:36883] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@125559c0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:23,902 INFO [RS:2;jenkins-hbase4:36883] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:23,903 INFO [RS:2;jenkins-hbase4:36883] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:23,903 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:23,903 INFO [RS:2;jenkins-hbase4:36883] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:23,904 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(3305): Received CLOSE for f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:23,904 INFO [RS:0;jenkins-hbase4:37933] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:23,904 INFO [RS:1;jenkins-hbase4:46199] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:23,904 INFO [RS:0;jenkins-hbase4:37933] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:23,905 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:23,905 INFO [RS:0;jenkins-hbase4:37933] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:23,905 INFO [RS:1;jenkins-hbase4:46199] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-18 02:15:23,905 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:23,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f07eda6b0430276b1c086c402e90a1e5, disabling compactions & flushes 2023-07-18 02:15:23,904 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:23,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:23,906 DEBUG [RS:2;jenkins-hbase4:36883] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x233a5539 to 127.0.0.1:53987 2023-07-18 02:15:23,906 INFO [RS:1;jenkins-hbase4:46199] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:23,906 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(3305): Received CLOSE for 1731d73df076ff213586f812958d935c 2023-07-18 02:15:23,907 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(3305): Received CLOSE for 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:23,906 DEBUG [RS:2;jenkins-hbase4:36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:23,907 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 02:15:23,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:23,907 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1478): Online Regions={f07eda6b0430276b1c086c402e90a1e5=hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5.} 2023-07-18 02:15:23,907 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:23,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1731d73df076ff213586f812958d935c, disabling compactions & flushes 2023-07-18 02:15:23,909 DEBUG [RS:0;jenkins-hbase4:37933] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0af16196 to 127.0.0.1:53987 2023-07-18 02:15:23,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. after waiting 0 ms 2023-07-18 02:15:23,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:23,907 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:23,909 DEBUG [RS:0;jenkins-hbase4:37933] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:23,909 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 
2023-07-18 02:15:23,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 56ce0d277bd1ede2fe41f5e8c85d5042, disabling compactions & flushes 2023-07-18 02:15:23,909 DEBUG [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1504): Waiting on f07eda6b0430276b1c086c402e90a1e5 2023-07-18 02:15:23,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:23,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:23,910 INFO [RS:0;jenkins-hbase4:37933] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:23,910 DEBUG [RS:1;jenkins-hbase4:46199] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x187f46a4 to 127.0.0.1:53987 2023-07-18 02:15:23,910 INFO [RS:0;jenkins-hbase4:37933] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:23,910 INFO [RS:0;jenkins-hbase4:37933] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:23,910 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 02:15:23,910 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 02:15:23,911 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 1731d73df076ff213586f812958d935c=hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c.} 2023-07-18 02:15:23,911 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:23,911 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:23,911 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:23,911 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:23,911 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:23,911 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 02:15:23,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. after waiting 0 ms 2023-07-18 02:15:23,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 
2023-07-18 02:15:23,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1731d73df076ff213586f812958d935c 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 02:15:23,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/quota/f07eda6b0430276b1c086c402e90a1e5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:23,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:23,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f07eda6b0430276b1c086c402e90a1e5: 2023-07-18 02:15:23,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689646522361.f07eda6b0430276b1c086c402e90a1e5. 2023-07-18 02:15:23,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:23,911 DEBUG [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1504): Waiting on 1588230740, 1731d73df076ff213586f812958d935c 2023-07-18 02:15:23,910 DEBUG [RS:1;jenkins-hbase4:46199] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:23,919 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 02:15:23,919 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1478): Online Regions={56ce0d277bd1ede2fe41f5e8c85d5042=hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042.} 2023-07-18 02:15:23,919 DEBUG [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1504): Waiting on 56ce0d277bd1ede2fe41f5e8c85d5042 2023-07-18 02:15:23,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. after waiting 0 ms 2023-07-18 02:15:23,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 
2023-07-18 02:15:23,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 56ce0d277bd1ede2fe41f5e8c85d5042 1/1 column families, dataSize=633 B heapSize=1.09 KB 2023-07-18 02:15:23,940 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:23,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/.tmp/info/007c50755490483d8a460d89ab466f82 2023-07-18 02:15:23,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/info/0a136aff6a2b4702a398ccccdee1e05e 2023-07-18 02:15:23,944 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:23,945 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:23,954 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=633 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/.tmp/m/2d80b408f697482db42de575e0712603 2023-07-18 02:15:23,960 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a136aff6a2b4702a398ccccdee1e05e 2023-07-18 02:15:23,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 007c50755490483d8a460d89ab466f82 2023-07-18 02:15:23,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/.tmp/m/2d80b408f697482db42de575e0712603 as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/m/2d80b408f697482db42de575e0712603 2023-07-18 02:15:23,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/.tmp/info/007c50755490483d8a460d89ab466f82 as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/info/007c50755490483d8a460d89ab466f82 2023-07-18 02:15:23,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/m/2d80b408f697482db42de575e0712603, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 02:15:23,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 007c50755490483d8a460d89ab466f82 
2023-07-18 02:15:23,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/info/007c50755490483d8a460d89ab466f82, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 02:15:23,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~633 B/633, heapSize ~1.07 KB/1096, currentSize=0 B/0 for 56ce0d277bd1ede2fe41f5e8c85d5042 in 58ms, sequenceid=7, compaction requested=false 2023-07-18 02:15:23,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 02:15:23,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 1731d73df076ff213586f812958d935c in 77ms, sequenceid=8, compaction requested=false 2023-07-18 02:15:23,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 02:15:23,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/rsgroup/56ce0d277bd1ede2fe41f5e8c85d5042/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 02:15:23,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:23,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:23,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 56ce0d277bd1ede2fe41f5e8c85d5042: 2023-07-18 02:15:23,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689646521888.56ce0d277bd1ede2fe41f5e8c85d5042. 2023-07-18 02:15:24,003 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/rep_barrier/36fe5f7a95ea4343aec0dcaa7a5f7618 2023-07-18 02:15:24,003 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/namespace/1731d73df076ff213586f812958d935c/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 02:15:24,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 2023-07-18 02:15:24,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1731d73df076ff213586f812958d935c: 2023-07-18 02:15:24,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689646521708.1731d73df076ff213586f812958d935c. 
2023-07-18 02:15:24,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 36fe5f7a95ea4343aec0dcaa7a5f7618 2023-07-18 02:15:24,021 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/table/39deee874bd04ba6a93d6d219357bb30 2023-07-18 02:15:24,027 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39deee874bd04ba6a93d6d219357bb30 2023-07-18 02:15:24,028 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/info/0a136aff6a2b4702a398ccccdee1e05e as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/info/0a136aff6a2b4702a398ccccdee1e05e 2023-07-18 02:15:24,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0a136aff6a2b4702a398ccccdee1e05e 2023-07-18 02:15:24,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/info/0a136aff6a2b4702a398ccccdee1e05e, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 02:15:24,036 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/rep_barrier/36fe5f7a95ea4343aec0dcaa7a5f7618 as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/rep_barrier/36fe5f7a95ea4343aec0dcaa7a5f7618 2023-07-18 02:15:24,044 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 36fe5f7a95ea4343aec0dcaa7a5f7618 2023-07-18 02:15:24,044 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/rep_barrier/36fe5f7a95ea4343aec0dcaa7a5f7618, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 02:15:24,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/.tmp/table/39deee874bd04ba6a93d6d219357bb30 as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/table/39deee874bd04ba6a93d6d219357bb30 2023-07-18 02:15:24,051 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39deee874bd04ba6a93d6d219357bb30 2023-07-18 02:15:24,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/table/39deee874bd04ba6a93d6d219357bb30, 
entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 02:15:24,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 142ms, sequenceid=31, compaction requested=false 2023-07-18 02:15:24,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 02:15:24,068 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 02:15:24,068 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:24,069 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:24,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:15:24,069 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:24,110 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36883,1689646520497; all regions closed. 2023-07-18 02:15:24,110 DEBUG [RS:2;jenkins-hbase4:36883] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 02:15:24,119 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37933,1689646520178; all regions closed. 2023-07-18 02:15:24,119 DEBUG [RS:0;jenkins-hbase4:37933] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 02:15:24,119 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46199,1689646520335; all regions closed. 2023-07-18 02:15:24,120 DEBUG [RS:1;jenkins-hbase4:46199] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 02:15:24,121 DEBUG [RS:2;jenkins-hbase4:36883] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs 2023-07-18 02:15:24,121 INFO [RS:2;jenkins-hbase4:36883] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36883%2C1689646520497:(num 1689646521572) 2023-07-18 02:15:24,121 DEBUG [RS:2;jenkins-hbase4:36883] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:24,121 INFO [RS:2;jenkins-hbase4:36883] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:24,128 INFO [RS:2;jenkins-hbase4:36883] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:24,128 INFO [RS:2;jenkins-hbase4:36883] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:24,128 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:24,128 INFO [RS:2;jenkins-hbase4:36883] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 02:15:24,128 INFO [RS:2;jenkins-hbase4:36883] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:24,145 INFO [RS:2;jenkins-hbase4:36883] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36883 2023-07-18 02:15:24,151 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:24,151 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:24,151 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:24,152 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:24,152 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:24,152 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36883,1689646520497] 2023-07-18 02:15:24,152 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36883,1689646520497; numProcessing=1 2023-07-18 02:15:24,152 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36883,1689646520497 2023-07-18 02:15:24,152 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:24,155 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36883,1689646520497 already deleted, retry=false 2023-07-18 02:15:24,155 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36883,1689646520497 expired; onlineServers=2 2023-07-18 02:15:24,157 DEBUG [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs 2023-07-18 02:15:24,157 INFO [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37933%2C1689646520178.meta:.meta(num 1689646521656) 2023-07-18 02:15:24,174 DEBUG [RS:1;jenkins-hbase4:46199] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs 2023-07-18 02:15:24,174 INFO [RS:1;jenkins-hbase4:46199] wal.AbstractFSWAL(1031): Closed 
WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46199%2C1689646520335:(num 1689646521572) 2023-07-18 02:15:24,174 DEBUG [RS:1;jenkins-hbase4:46199] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:24,174 INFO [RS:1;jenkins-hbase4:46199] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:24,175 INFO [RS:1;jenkins-hbase4:46199] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:24,175 INFO [RS:1;jenkins-hbase4:46199] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:24,175 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:24,175 INFO [RS:1;jenkins-hbase4:46199] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:24,175 INFO [RS:1;jenkins-hbase4:46199] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 02:15:24,176 INFO [RS:1;jenkins-hbase4:46199] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46199 2023-07-18 02:15:24,177 DEBUG [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/oldWALs 2023-07-18 02:15:24,177 INFO [RS:0;jenkins-hbase4:37933] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37933%2C1689646520178:(num 1689646521575) 2023-07-18 02:15:24,177 DEBUG [RS:0;jenkins-hbase4:37933] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:24,177 INFO [RS:0;jenkins-hbase4:37933] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:24,177 INFO [RS:0;jenkins-hbase4:37933] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:24,178 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:24,180 INFO [RS:0;jenkins-hbase4:37933] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37933 2023-07-18 02:15:24,258 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,258 INFO [RS:2;jenkins-hbase4:36883] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36883,1689646520497; zookeeper connection closed. 
2023-07-18 02:15:24,258 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:36883-0x101763659290003, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,259 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64d5eb30] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64d5eb30 2023-07-18 02:15:24,259 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:24,259 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:24,261 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:24,259 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46199,1689646520335 2023-07-18 02:15:24,261 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37933,1689646520178 2023-07-18 02:15:24,261 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46199,1689646520335] 2023-07-18 02:15:24,261 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46199,1689646520335; numProcessing=2 2023-07-18 02:15:24,264 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46199,1689646520335 already deleted, retry=false 2023-07-18 02:15:24,264 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46199,1689646520335 expired; onlineServers=1 2023-07-18 02:15:24,264 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37933,1689646520178] 2023-07-18 02:15:24,264 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37933,1689646520178; numProcessing=3 2023-07-18 02:15:24,265 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37933,1689646520178 already deleted, retry=false 2023-07-18 02:15:24,265 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37933,1689646520178 expired; onlineServers=0 2023-07-18 02:15:24,265 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43727,1689646519894' ***** 2023-07-18 02:15:24,265 INFO 
[RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 02:15:24,266 DEBUG [M:0;jenkins-hbase4:43727] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1dd9b538, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:24,266 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:24,268 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:24,268 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:24,268 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:24,268 INFO [M:0;jenkins-hbase4:43727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19a648c7{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 02:15:24,269 INFO [M:0;jenkins-hbase4:43727] server.AbstractConnector(383): Stopped ServerConnector@17e6870e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:24,269 INFO [M:0;jenkins-hbase4:43727] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:24,269 INFO [M:0;jenkins-hbase4:43727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5959e4bb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:24,269 INFO [M:0;jenkins-hbase4:43727] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6bab3cdc{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:24,270 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43727,1689646519894 2023-07-18 02:15:24,270 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43727,1689646519894; all regions closed. 2023-07-18 02:15:24,270 DEBUG [M:0;jenkins-hbase4:43727] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:24,270 INFO [M:0;jenkins-hbase4:43727] master.HMaster(1491): Stopping master jetty server 2023-07-18 02:15:24,270 INFO [M:0;jenkins-hbase4:43727] server.AbstractConnector(383): Stopped ServerConnector@50d1e9c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:24,271 DEBUG [M:0;jenkins-hbase4:43727] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 02:15:24,271 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-18 02:15:24,271 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646521293] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646521293,5,FailOnTimeoutGroup] 2023-07-18 02:15:24,271 DEBUG [M:0;jenkins-hbase4:43727] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 02:15:24,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646521293] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646521293,5,FailOnTimeoutGroup] 2023-07-18 02:15:24,272 INFO [M:0;jenkins-hbase4:43727] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 02:15:24,272 INFO [M:0;jenkins-hbase4:43727] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 02:15:24,273 INFO [M:0;jenkins-hbase4:43727] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:24,273 DEBUG [M:0;jenkins-hbase4:43727] master.HMaster(1512): Stopping service threads 2023-07-18 02:15:24,273 INFO [M:0;jenkins-hbase4:43727] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 02:15:24,273 ERROR [M:0;jenkins-hbase4:43727] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 02:15:24,274 INFO [M:0;jenkins-hbase4:43727] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 02:15:24,274 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 02:15:24,274 DEBUG [M:0;jenkins-hbase4:43727] zookeeper.ZKUtil(398): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 02:15:24,274 WARN [M:0;jenkins-hbase4:43727] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 02:15:24,274 INFO [M:0;jenkins-hbase4:43727] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 02:15:24,275 INFO [M:0;jenkins-hbase4:43727] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 02:15:24,275 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 02:15:24,275 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:24,275 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:24,275 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 02:15:24,275 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 02:15:24,275 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.16 KB 2023-07-18 02:15:24,290 INFO [M:0;jenkins-hbase4:43727] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ba7653e96ff46c4a9fdee86d9ccc702 2023-07-18 02:15:24,302 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/2ba7653e96ff46c4a9fdee86d9ccc702 as hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ba7653e96ff46c4a9fdee86d9ccc702 2023-07-18 02:15:24,309 INFO [M:0;jenkins-hbase4:43727] regionserver.HStore(1080): Added hdfs://localhost:45369/user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/2ba7653e96ff46c4a9fdee86d9ccc702, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 02:15:24,311 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95234, heapSize ~109.14 KB/111760, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 35ms, sequenceid=194, compaction requested=false 2023-07-18 02:15:24,321 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:24,321 DEBUG [M:0;jenkins-hbase4:43727] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:24,330 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e03c3b26-b353-ec05-c668-9182fc895ea0/MasterData/WALs/jenkins-hbase4.apache.org,43727,1689646519894/jenkins-hbase4.apache.org%2C43727%2C1689646519894.1689646521204 not finished, retry = 0 2023-07-18 02:15:24,431 INFO [M:0;jenkins-hbase4:43727] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 02:15:24,431 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:24,432 INFO [M:0;jenkins-hbase4:43727] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43727 2023-07-18 02:15:24,434 DEBUG [M:0;jenkins-hbase4:43727] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,43727,1689646519894 already deleted, retry=false 2023-07-18 02:15:24,480 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,480 INFO [RS:0;jenkins-hbase4:37933] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37933,1689646520178; zookeeper connection closed. 
2023-07-18 02:15:24,480 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:37933-0x101763659290001, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,482 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@52be3f06] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@52be3f06 2023-07-18 02:15:24,580 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,580 INFO [RS:1;jenkins-hbase4:46199] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46199,1689646520335; zookeeper connection closed. 2023-07-18 02:15:24,580 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): regionserver:46199-0x101763659290002, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,580 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@756d3604] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@756d3604 2023-07-18 02:15:24,581 INFO [Listener at localhost/42081] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 02:15:24,680 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,680 INFO [M:0;jenkins-hbase4:43727] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43727,1689646519894; zookeeper connection closed. 
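Aside: the entries above close out the first mini-cluster ("Shutdown of 1 master(s) and 3 regionserver(s) complete"), and a few entries below the test utility brings up a fresh one with the same StartMiniClusterOption. A minimal sketch of that tear-down/start-up cycle, assuming only the HBaseTestingUtility and StartMiniClusterOption builder named in the log (this is not the test's actual code):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterCycleSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();

        // Mirrors StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3,
        // numZkServers=1, createRootDir=false, createWALDir=false} from the log.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .createRootDir(false)
            .createWALDir(false)
            .build();

        util.startMiniCluster(option);   // starts DFS, ZK, one master and three region servers
        try {
          // ... exercise the cluster here ...
        } finally {
          util.shutdownMiniCluster();    // produces the "Minicluster is down" entry
        }
      }
    }
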
2023-07-18 02:15:24,680 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): master:43727-0x101763659290000, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 02:15:24,682 WARN [Listener at localhost/42081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 02:15:24,685 INFO [Listener at localhost/42081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:24,790 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:24,791 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1025603019-172.31.14.131-1689646518980 (Datanode Uuid bfce7ef3-d642-4887-bccd-e2277610874f) service to localhost/127.0.0.1:45369 2023-07-18 02:15:24,792 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data5/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,792 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data6/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,793 WARN [Listener at localhost/42081] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 02:15:24,796 INFO [Listener at localhost/42081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:24,901 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:24,902 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1025603019-172.31.14.131-1689646518980 (Datanode Uuid b0b3ac22-6328-42b5-8d35-0e07147bf67a) service to localhost/127.0.0.1:45369 2023-07-18 02:15:24,902 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data3/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,902 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data4/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,904 WARN [Listener at localhost/42081] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-18 02:15:24,907 INFO [Listener at localhost/42081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:24,909 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 02:15:24,909 WARN [BP-1025603019-172.31.14.131-1689646518980 heartbeating to localhost/127.0.0.1:45369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1025603019-172.31.14.131-1689646518980 (Datanode Uuid 3d685646-bffb-4f36-8628-0960e2d5d90f) service to localhost/127.0.0.1:45369 2023-07-18 02:15:24,910 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data1/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,910 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/cluster_9a8fabb4-8f95-7b44-10e3-85eaa675d67d/dfs/data/data2/current/BP-1025603019-172.31.14.131-1689646518980] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 02:15:24,920 INFO [Listener at localhost/42081] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 02:15:25,036 INFO [Listener at localhost/42081] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 02:15:25,068 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.log.dir so I do NOT create it in target/test-data/527ac157-3429-81c3-f99e-cf221451c37d 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/dcf32670-6396-6fa3-250d-b264fc0f6dfa/hadoop.tmp.dir so I do NOT create it in target/test-data/527ac157-3429-81c3-f99e-cf221451c37d 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374, deleteOnExit=true 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/test.cache.data in system properties and HBase conf 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir in system properties and HBase conf 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 02:15:25,069 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 02:15:25,070 DEBUG [Listener at localhost/42081] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 02:15:25,070 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/nfs.dump.dir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 02:15:25,071 INFO [Listener at localhost/42081] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 02:15:25,075 WARN [Listener at localhost/42081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:15:25,076 WARN [Listener at localhost/42081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:15:25,119 WARN [Listener at localhost/42081] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:25,121 INFO [Listener at localhost/42081] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:25,126 INFO 
[Listener at localhost/42081] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/Jetty_localhost_41929_hdfs____wv8zx/webapp 2023-07-18 02:15:25,135 DEBUG [Listener at localhost/42081-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10176365929000a, quorum=127.0.0.1:53987, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 02:15:25,135 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10176365929000a, quorum=127.0.0.1:53987, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 02:15:25,217 INFO [Listener at localhost/42081] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41929 2023-07-18 02:15:25,221 WARN [Listener at localhost/42081] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 02:15:25,221 WARN [Listener at localhost/42081] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 02:15:25,266 WARN [Listener at localhost/41331] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:25,281 WARN [Listener at localhost/41331] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:25,284 WARN [Listener at localhost/41331] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:25,285 INFO [Listener at localhost/41331] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:25,290 INFO [Listener at localhost/41331] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/Jetty_localhost_38923_datanode____.pqmig4/webapp 2023-07-18 02:15:25,383 INFO [Listener at localhost/41331] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38923 2023-07-18 02:15:25,391 WARN [Listener at localhost/33349] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:25,422 WARN [Listener at localhost/33349] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:25,426 WARN [Listener at localhost/33349] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:25,427 INFO [Listener at localhost/33349] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:25,431 INFO [Listener at localhost/33349] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/Jetty_localhost_45561_datanode____.ugr9ca/webapp 2023-07-18 02:15:25,508 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd07f2545d237f9: Processing first storage report for DS-c23070b9-4420-4118-9f01-dcf5c111c9ec from datanode 5f6b1722-f89b-4e77-a3a2-e2587aacc9d2 2023-07-18 02:15:25,508 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd07f2545d237f9: from storage DS-c23070b9-4420-4118-9f01-dcf5c111c9ec node DatanodeRegistration(127.0.0.1:35061, datanodeUuid=5f6b1722-f89b-4e77-a3a2-e2587aacc9d2, infoPort=41575, infoSecurePort=0, ipcPort=33349, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,508 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbd07f2545d237f9: Processing first storage report for DS-db26c8c4-4ea5-4eed-8023-538633cb16fc from datanode 5f6b1722-f89b-4e77-a3a2-e2587aacc9d2 2023-07-18 02:15:25,509 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbd07f2545d237f9: from storage DS-db26c8c4-4ea5-4eed-8023-538633cb16fc node DatanodeRegistration(127.0.0.1:35061, datanodeUuid=5f6b1722-f89b-4e77-a3a2-e2587aacc9d2, infoPort=41575, infoSecurePort=0, ipcPort=33349, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,537 INFO [Listener at localhost/33349] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45561 2023-07-18 02:15:25,546 WARN [Listener at localhost/34679] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:25,567 WARN [Listener at localhost/34679] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 02:15:25,569 WARN [Listener at localhost/34679] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 02:15:25,571 INFO [Listener at localhost/34679] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 02:15:25,576 INFO [Listener at localhost/34679] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/Jetty_localhost_35533_datanode____xk90d6/webapp 2023-07-18 02:15:25,654 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x279c0b97abd14fde: Processing first storage report for DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e from datanode cb1d07e8-ef7e-4904-8ccb-8f29344c0d1c 2023-07-18 02:15:25,654 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x279c0b97abd14fde: from storage DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e node DatanodeRegistration(127.0.0.1:37759, datanodeUuid=cb1d07e8-ef7e-4904-8ccb-8f29344c0d1c, infoPort=35123, infoSecurePort=0, ipcPort=34679, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,654 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x279c0b97abd14fde: Processing first storage report for DS-65cc32d7-4e36-440c-8ec0-e756cac58bdc from datanode 
cb1d07e8-ef7e-4904-8ccb-8f29344c0d1c 2023-07-18 02:15:25,654 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x279c0b97abd14fde: from storage DS-65cc32d7-4e36-440c-8ec0-e756cac58bdc node DatanodeRegistration(127.0.0.1:37759, datanodeUuid=cb1d07e8-ef7e-4904-8ccb-8f29344c0d1c, infoPort=35123, infoSecurePort=0, ipcPort=34679, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,684 INFO [Listener at localhost/34679] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35533 2023-07-18 02:15:25,694 WARN [Listener at localhost/42627] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 02:15:25,782 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa49305ea31d2af84: Processing first storage report for DS-1de6045e-347e-487b-a9d9-61a01cb59513 from datanode 8a7e2789-5c6f-422c-abaa-47faba236b2c 2023-07-18 02:15:25,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa49305ea31d2af84: from storage DS-1de6045e-347e-487b-a9d9-61a01cb59513 node DatanodeRegistration(127.0.0.1:39839, datanodeUuid=8a7e2789-5c6f-422c-abaa-47faba236b2c, infoPort=46697, infoSecurePort=0, ipcPort=42627, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,782 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa49305ea31d2af84: Processing first storage report for DS-b8b0836a-bf8c-42f9-b3bb-5a5e505301d2 from datanode 8a7e2789-5c6f-422c-abaa-47faba236b2c 2023-07-18 02:15:25,782 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa49305ea31d2af84: from storage DS-b8b0836a-bf8c-42f9-b3bb-5a5e505301d2 node DatanodeRegistration(127.0.0.1:39839, datanodeUuid=8a7e2789-5c6f-422c-abaa-47faba236b2c, infoPort=46697, infoSecurePort=0, ipcPort=42627, storageInfo=lv=-57;cid=testClusterID;nsid=1390602734;c=1689646525078), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 02:15:25,806 DEBUG [Listener at localhost/42627] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d 2023-07-18 02:15:25,808 INFO [Listener at localhost/42627] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/zookeeper_0, clientPort=64106, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 02:15:25,809 INFO [Listener at localhost/42627] 
zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64106 2023-07-18 02:15:25,809 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:25,810 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:25,825 INFO [Listener at localhost/42627] util.FSUtils(471): Created version file at hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 with version=8 2023-07-18 02:15:25,825 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:45101/user/jenkins/test-data/6af1c3a2-c5d1-2318-c5b6-96cab05923a7/hbase-staging 2023-07-18 02:15:25,826 DEBUG [Listener at localhost/42627] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 02:15:25,826 DEBUG [Listener at localhost/42627] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 02:15:25,826 DEBUG [Listener at localhost/42627] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 02:15:25,826 DEBUG [Listener at localhost/42627] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:25,827 INFO [Listener at localhost/42627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:25,829 INFO [Listener at localhost/42627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34701 2023-07-18 02:15:25,829 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:25,830 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do 
block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:25,831 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34701 connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:25,837 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:347010x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:25,838 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34701-0x101763670720000 connected 2023-07-18 02:15:25,859 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:25,859 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:25,860 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:25,863 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34701 2023-07-18 02:15:25,863 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34701 2023-07-18 02:15:25,863 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34701 2023-07-18 02:15:25,865 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34701 2023-07-18 02:15:25,866 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34701 2023-07-18 02:15:25,868 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:25,868 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:25,868 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:25,869 INFO [Listener at localhost/42627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 02:15:25,869 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:25,869 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:25,869 INFO [Listener at localhost/42627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
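Aside: the ZKUtil lines above ("Set watcher on znode that does not yet exist, /hbase/master", /hbase/running, /hbase/acl) follow the standard ZooKeeper idiom of registering a watch through an exists() call, which fires once the node is later created. A hedged sketch with the plain org.apache.zookeeper client rather than HBase's ZKUtil wrapper; the connect string and path are taken from the log, the rest is illustrative:

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ExistsWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);

        // Connect string matches the test's mini ZK quorum in the log.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:64106", 30000, event -> { });

        // exists() returns null when the znode is absent, but still registers the watch.
        Stat stat = zk.exists("/hbase/master", new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeCreated) {
              created.countDown();   // fires once the active master writes /hbase/master
            }
          }
        });

        if (stat == null) {
          created.await();           // wait for the NodeCreated event seen later in the log
        }
        zk.close();
      }
    }
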
2023-07-18 02:15:25,869 INFO [Listener at localhost/42627] http.HttpServer(1146): Jetty bound to port 36071 2023-07-18 02:15:25,870 INFO [Listener at localhost/42627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:25,873 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:25,874 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@558dcba1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:25,874 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:25,874 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ce8454f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:25,987 INFO [Listener at localhost/42627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:25,988 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:25,988 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:25,989 INFO [Listener at localhost/42627] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:25,990 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:25,991 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@21d993e9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/jetty-0_0_0_0-36071-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3165746715335771173/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-18 02:15:25,993 INFO [Listener at localhost/42627] server.AbstractConnector(333): Started ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:36071} 2023-07-18 02:15:25,993 INFO [Listener at localhost/42627] server.Server(415): Started @44204ms 2023-07-18 02:15:25,993 INFO [Listener at localhost/42627] master.HMaster(444): hbase.rootdir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940, hbase.cluster.distributed=false 2023-07-18 02:15:26,006 INFO [Listener at localhost/42627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:26,007 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,007 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,007 
INFO [Listener at localhost/42627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:26,007 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,007 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:26,007 INFO [Listener at localhost/42627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:26,008 INFO [Listener at localhost/42627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42149 2023-07-18 02:15:26,008 INFO [Listener at localhost/42627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:26,009 DEBUG [Listener at localhost/42627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:26,009 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,010 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,011 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42149 connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:26,021 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:421490x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:26,022 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42149-0x101763670720001 connected 2023-07-18 02:15:26,022 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:26,023 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:26,023 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:26,024 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42149 2023-07-18 02:15:26,024 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42149 2023-07-18 02:15:26,024 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42149 2023-07-18 02:15:26,024 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42149 2023-07-18 02:15:26,025 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42149 2023-07-18 02:15:26,026 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:26,026 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:26,027 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:26,027 INFO [Listener at localhost/42627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:26,027 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:26,027 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:26,027 INFO [Listener at localhost/42627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:15:26,028 INFO [Listener at localhost/42627] http.HttpServer(1146): Jetty bound to port 42623 2023-07-18 02:15:26,028 INFO [Listener at localhost/42627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:26,029 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,029 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@75e5d650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:26,030 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,030 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@295166fc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:26,142 INFO [Listener at localhost/42627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:26,143 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:26,143 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:26,143 INFO [Listener at localhost/42627] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 02:15:26,144 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,145 INFO 
[Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5d16009b{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/jetty-0_0_0_0-42623-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3969757375711677977/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:26,146 INFO [Listener at localhost/42627] server.AbstractConnector(333): Started ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:42623} 2023-07-18 02:15:26,146 INFO [Listener at localhost/42627] server.Server(415): Started @44358ms 2023-07-18 02:15:26,157 INFO [Listener at localhost/42627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:26,158 INFO [Listener at localhost/42627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:26,159 INFO [Listener at localhost/42627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46217 2023-07-18 02:15:26,159 INFO [Listener at localhost/42627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:26,160 DEBUG [Listener at localhost/42627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:26,161 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,161 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,162 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46217 connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:26,166 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:462170x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 
02:15:26,167 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:462170x0, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:26,167 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46217-0x101763670720002 connected 2023-07-18 02:15:26,168 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:26,168 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:26,168 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46217 2023-07-18 02:15:26,169 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46217 2023-07-18 02:15:26,169 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46217 2023-07-18 02:15:26,170 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46217 2023-07-18 02:15:26,171 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46217 2023-07-18 02:15:26,172 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:26,173 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:26,173 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:26,173 INFO [Listener at localhost/42627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:26,173 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:26,174 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:26,174 INFO [Listener at localhost/42627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
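Aside: the "Jetty bound to port ...", "Started ServerConnector@..." and "Started o.a.h.t.o.e.j.s.ServletContextHandler@..." entries come from the embedded (shaded) Jetty 9.4 that serves the master and region-server info pages. A rough sketch of that wiring using the unshaded Jetty API (HBase itself uses the org.apache.hbase.thirdparty relocation visible in the log); port 0 asks Jetty for an ephemeral port, as the test does:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.servlet.ServletContextHandler;

    public class InfoServerSketch {
      public static void main(String[] args) throws Exception {
        Server server = new Server();

        // Counterpart of the "Started ServerConnector@...{HTTP/1.1}{0.0.0.0:port}" lines.
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(0);                       // 0 = pick a free ephemeral port
        server.addConnector(connector);

        // Counterpart of the ServletContextHandler contexts (/logs, /static, ...).
        ServletContextHandler logs = new ServletContextHandler();
        logs.setContextPath("/logs");
        logs.setResourceBase("/tmp/hadoop-logs");   // hypothetical directory for the example
        server.setHandler(logs);

        server.start();
        System.out.println("Jetty bound to port " + connector.getLocalPort());
        server.stop();
      }
    }
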
2023-07-18 02:15:26,174 INFO [Listener at localhost/42627] http.HttpServer(1146): Jetty bound to port 39899 2023-07-18 02:15:26,175 INFO [Listener at localhost/42627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:26,179 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,179 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@288689f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:26,179 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,179 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@42cd0009{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:26,291 INFO [Listener at localhost/42627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:26,291 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:26,291 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:26,292 INFO [Listener at localhost/42627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:15:26,292 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,293 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f4b851f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/jetty-0_0_0_0-39899-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2698000493836841136/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:26,295 INFO [Listener at localhost/42627] server.AbstractConnector(333): Started ServerConnector@6d04da20{HTTP/1.1, (http/1.1)}{0.0.0.0:39899} 2023-07-18 02:15:26,295 INFO [Listener at localhost/42627] server.Server(415): Started @44507ms 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:26,307 INFO 
[Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:26,307 INFO [Listener at localhost/42627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:26,308 INFO [Listener at localhost/42627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32775 2023-07-18 02:15:26,308 INFO [Listener at localhost/42627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:26,311 DEBUG [Listener at localhost/42627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:26,312 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,313 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,313 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32775 connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:26,317 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:327750x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:26,318 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32775-0x101763670720003 connected 2023-07-18 02:15:26,318 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 02:15:26,319 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:26,319 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:26,319 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32775 2023-07-18 02:15:26,320 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32775 2023-07-18 02:15:26,320 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32775 2023-07-18 02:15:26,323 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32775 2023-07-18 02:15:26,323 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32775 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:26,325 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:26,326 INFO [Listener at localhost/42627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 02:15:26,326 INFO [Listener at localhost/42627] http.HttpServer(1146): Jetty bound to port 46593 2023-07-18 02:15:26,326 INFO [Listener at localhost/42627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:26,331 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,331 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51b99b82{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:26,332 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,332 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@736db136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:26,445 INFO [Listener at localhost/42627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:26,445 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:26,446 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:26,446 INFO [Listener at localhost/42627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:15:26,447 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:26,447 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@72c3dca2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/jetty-0_0_0_0-46593-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5603094831319309334/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:26,449 INFO [Listener at localhost/42627] server.AbstractConnector(333): Started ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:46593} 2023-07-18 02:15:26,450 INFO [Listener at localhost/42627] server.Server(415): Started @44661ms 2023-07-18 02:15:26,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:26,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:34909} 2023-07-18 02:15:26,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44672ms 2023-07-18 02:15:26,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,462 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:15:26,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,463 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:26,463 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:26,463 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:26,463 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,463 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 02:15:26,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:15:26,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34701,1689646525826 from backup master directory 2023-07-18 02:15:26,467 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:15:26,468 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,468 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:26,468 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 02:15:26,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,486 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/hbase.id with ID: 19d49dd7-0c61-44f8-81d7-41a3754fa79f 2023-07-18 02:15:26,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:26,501 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,515 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x127c9385 to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:26,521 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6afe6a31, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:26,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:26,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 02:15:26,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:26,524 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store-tmp 2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 02:15:26,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 02:15:26,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
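
[Editor's note] For orientation only: the 'proc' column-family descriptor logged above (VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', IN_MEMORY => 'false') can be expressed with the public HBase 2.x client API. This is an illustrative sketch, not part of the test code; the table name and class name are hypothetical, and settings not shown fall back to their defaults.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
        public static void main(String[] args) {
            // Rebuild the 'proc' family settings printed in the log:
            // one version, ROW bloom filter, 64 KB blocks, not pinned in memory.
            TableDescriptor td = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("example", "store"))  // hypothetical table name
                .setColumnFamily(ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("proc"))
                    .setMaxVersions(1)
                    .setBloomFilterType(BloomType.ROW)
                    .setBlocksize(64 * 1024)
                    .setInMemory(false)
                    .build())
                .build();
            System.out.println(td);
        }
    }
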
2023-07-18 02:15:26,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:26,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/WALs/jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34701%2C1689646525826, suffix=, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/WALs/jenkins-hbase4.apache.org,34701,1689646525826, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/oldWALs, maxLogs=10 2023-07-18 02:15:26,555 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:26,558 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:26,556 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:26,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/WALs/jenkins-hbase4.apache.org,34701,1689646525826/jenkins-hbase4.apache.org%2C34701%2C1689646525826.1689646526541 2023-07-18 02:15:26,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK], DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK]] 2023-07-18 02:15:26,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:26,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:26,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,566 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,567 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 02:15:26,568 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 02:15:26,568 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:26,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,573 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 02:15:26,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:26,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9838021120, jitterRate=-0.08376288414001465}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:26,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 02:15:26,578 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 02:15:26,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 02:15:26,580 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 02:15:26,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 02:15:26,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 02:15:26,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 02:15:26,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 02:15:26,582 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 02:15:26,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 02:15:26,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 02:15:26,584 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 02:15:26,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 02:15:26,586 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,586 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 02:15:26,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 02:15:26,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 02:15:26,589 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:26,589 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:26,589 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 02:15:26,589 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,589 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:26,589 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34701,1689646525826, sessionid=0x101763670720000, setting cluster-up flag (Was=false) 2023-07-18 02:15:26,595 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 02:15:26,601 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,603 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 02:15:26,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:26,609 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.hbase-snapshot/.tmp 2023-07-18 02:15:26,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 02:15:26,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 02:15:26,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 02:15:26,611 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:15:26,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
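
[Editor's note] The repeated "Set watcher on znode that does not yet exist" lines above reflect an existence watch: the watch is registered even while the path is absent, so the later NodeCreated events (e.g. /hbase/running) are delivered as soon as the znode appears. A minimal sketch with the plain Apache ZooKeeper client, assuming a reachable ensemble; the address, timeout, and class name are illustrative.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // The ensemble address is an assumption; adjust to the quorum in use.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
                if (event.getType() == Watcher.Event.EventType.NodeCreated) {
                    System.out.println("znode created: " + event.getPath());
                }
            });
            connected.await();
            // exists() with watch=true registers the watch even though the znode is
            // absent, so a later NodeCreated on this path reaches the default watcher.
            zk.exists("/hbase/master", true);
            Thread.sleep(10_000);  // keep the session alive long enough to observe the event
            zk.close();
        }
    }
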
2023-07-18 02:15:26,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:26,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:15:26,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 02:15:26,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 02:15:26,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 02:15:26,625 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:26,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689646556628 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 02:15:26,628 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,629 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:26,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 02:15:26,629 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 02:15:26,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 02:15:26,629 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 02:15:26,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 02:15:26,631 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:26,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 02:15:26,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646526631,5,FailOnTimeoutGroup] 2023-07-18 02:15:26,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646526638,5,FailOnTimeoutGroup] 2023-07-18 02:15:26,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 02:15:26,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,648 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:26,648 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:26,649 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 2023-07-18 02:15:26,651 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(951): ClusterId : 19d49dd7-0c61-44f8-81d7-41a3754fa79f 2023-07-18 02:15:26,651 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(951): ClusterId : 19d49dd7-0c61-44f8-81d7-41a3754fa79f 2023-07-18 02:15:26,658 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:26,658 DEBUG [RS:0;jenkins-hbase4:42149] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:26,661 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(951): ClusterId : 19d49dd7-0c61-44f8-81d7-41a3754fa79f 2023-07-18 02:15:26,661 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:26,661 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:26,661 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:26,663 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:26,663 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:26,665 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:26,665 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:26,665 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:26,667 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ReadOnlyZKClient(139): Connect 0x561f31ec to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:26,667 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:26,670 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:26,671 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ReadOnlyZKClient(139): Connect 0x3a22d183 to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:26,672 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:26,674 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ReadOnlyZKClient(139): Connect 0x4edb461d to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:26,682 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:15:26,688 DEBUG [RS:1;jenkins-hbase4:46217] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58d8d55f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:26,689 DEBUG [RS:1;jenkins-hbase4:46217] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ce75a9c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:26,691 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/info 2023-07-18 02:15:26,691 DEBUG [RS:0;jenkins-hbase4:42149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e15f607, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:26,692 DEBUG [RS:0;jenkins-hbase4:42149] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bd22e66, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:26,692 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:15:26,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:26,693 DEBUG [RS:2;jenkins-hbase4:32775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@401ca1b6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:26,693 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:15:26,693 DEBUG [RS:2;jenkins-hbase4:32775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44232d6d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:26,695 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:26,695 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, 
major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:15:26,696 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:26,696 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:15:26,697 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/table 2023-07-18 02:15:26,698 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:15:26,698 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:26,699 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740 2023-07-18 02:15:26,699 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740 2023-07-18 02:15:26,702 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
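
[Editor's note] The CompactionConfiguration entries above print the effective store-level settings (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, major period 604800000 ms). Were these to be tuned in a test or deployment, the corresponding configuration keys could be set as sketched below; the values shown are the defaults reported in the log, and the class name is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Keys behind the values CompactionConfiguration prints; the numbers
            // are the defaults reported in the log, set explicitly for illustration.
            conf.setInt("hbase.hstore.compaction.min", 3);        // minFilesToCompact
            conf.setInt("hbase.hstore.compaction.max", 10);       // maxFilesToCompact
            conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // compaction ratio
            conf.setLong("hbase.hregion.majorcompaction", 604800000L); // major period in ms
            System.out.println(conf.get("hbase.hstore.compaction.ratio"));
        }
    }
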
2023-07-18 02:15:26,702 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:32775 2023-07-18 02:15:26,702 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42149 2023-07-18 02:15:26,702 INFO [RS:2;jenkins-hbase4:32775] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:26,702 INFO [RS:0;jenkins-hbase4:42149] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:26,702 INFO [RS:0;jenkins-hbase4:42149] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:26,702 INFO [RS:2;jenkins-hbase4:32775] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:26,702 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:26,702 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:26,703 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34701,1689646525826 with isa=jenkins-hbase4.apache.org/172.31.14.131:42149, startcode=1689646526006 2023-07-18 02:15:26,703 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34701,1689646525826 with isa=jenkins-hbase4.apache.org/172.31.14.131:32775, startcode=1689646526307 2023-07-18 02:15:26,703 DEBUG [RS:0;jenkins-hbase4:42149] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:26,703 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:15:26,703 DEBUG [RS:2;jenkins-hbase4:32775] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:26,704 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46217 2023-07-18 02:15:26,704 INFO [RS:1;jenkins-hbase4:46217] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:26,704 INFO [RS:1;jenkins-hbase4:46217] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:26,704 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 02:15:26,704 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34701,1689646525826 with isa=jenkins-hbase4.apache.org/172.31.14.131:46217, startcode=1689646526157 2023-07-18 02:15:26,704 DEBUG [RS:1;jenkins-hbase4:46217] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:26,707 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60773, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:26,707 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:26,707 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33309, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:26,709 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,709 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:15:26,710 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10473062720, jitterRate=-0.024620026350021362}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:15:26,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 02:15:26,710 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:15:26,710 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,710 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57393, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:26,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
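
[Editor's note] Once the region servers above have reported for duty and ServerManager has registered them, a client can confirm the live set through the Admin API. A hedged sketch against the public HBase 2.x client; the connection setup and class name are assumptions, not part of the test.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListRegionServersSketch {
        public static void main(String[] args) throws IOException {
            // Assumes an hbase-site.xml on the classpath pointing at the cluster.
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // ClusterMetrics exposes the region servers ServerManager has registered.
                for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                    System.out.println(sn);
                }
            }
        }
    }
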
2023-07-18 02:15:26,710 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:26,710 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 2023-07-18 02:15:26,710 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:26,710 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 2023-07-18 02:15:26,710 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41331 2023-07-18 02:15:26,710 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 02:15:26,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 02:15:26,710 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36071 2023-07-18 02:15:26,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 02:15:26,710 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41331 2023-07-18 02:15:26,710 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:26,711 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36071 2023-07-18 02:15:26,711 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:26,711 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 2023-07-18 02:15:26,711 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:26,711 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41331 2023-07-18 02:15:26,711 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36071 2023-07-18 02:15:26,711 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 02:15:26,711 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 02:15:26,712 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 
02:15:26,712 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 02:15:26,712 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 02:15:26,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 02:15:26,713 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 02:15:26,716 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 02:15:26,717 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,717 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,32775,1689646526307] 2023-07-18 02:15:26,717 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42149,1689646526006] 2023-07-18 02:15:26,717 WARN [RS:0;jenkins-hbase4:42149] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:26,718 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46217,1689646526157] 2023-07-18 02:15:26,718 INFO [RS:0;jenkins-hbase4:42149] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:26,718 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,718 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,718 WARN [RS:1;jenkins-hbase4:46217] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 02:15:26,718 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,718 INFO [RS:1;jenkins-hbase4:46217] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:26,718 WARN [RS:2;jenkins-hbase4:32775] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
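
[Editor's note] Each region server above instantiates AsyncFSWALProvider as its WAL provider. In HBase 2.x this choice is driven by the hbase.wal.provider key; a minimal configuration sketch follows (class name hypothetical), where "filesystem" would select the classic FSHLog-based provider instead.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "asyncfs" selects AsyncFSWALProvider, the provider logged above.
            conf.set("hbase.wal.provider", "asyncfs");
            System.out.println(conf.get("hbase.wal.provider"));
        }
    }
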
2023-07-18 02:15:26,718 INFO [RS:2;jenkins-hbase4:32775] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:26,718 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,718 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,726 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,727 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,727 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,727 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,727 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,727 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,727 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,727 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,727 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,728 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:26,728 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:26,728 DEBUG [RS:1;jenkins-hbase4:46217] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:26,728 INFO [RS:0;jenkins-hbase4:42149] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:26,728 INFO [RS:2;jenkins-hbase4:32775] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 
milliseconds 2023-07-18 02:15:26,728 INFO [RS:1;jenkins-hbase4:46217] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:26,730 INFO [RS:0;jenkins-hbase4:42149] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:26,731 INFO [RS:0;jenkins-hbase4:42149] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:26,731 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,734 INFO [RS:1;jenkins-hbase4:46217] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:26,734 INFO [RS:2;jenkins-hbase4:32775] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:26,734 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:26,735 INFO [RS:1;jenkins-hbase4:46217] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:26,735 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,735 INFO [RS:2;jenkins-hbase4:32775] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:26,735 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,736 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:26,736 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:26,737 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,738 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,738 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,738 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,738 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:26,738 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,738 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:26,739 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,739 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:0;jenkins-hbase4:42149] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:1;jenkins-hbase4:46217] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,740 DEBUG [RS:2;jenkins-hbase4:32775] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:26,746 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,746 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,747 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:26,747 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,758 INFO [RS:0;jenkins-hbase4:42149] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:26,758 INFO [RS:1;jenkins-hbase4:46217] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:26,758 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42149,1689646526006-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,758 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46217,1689646526157-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,766 INFO [RS:2;jenkins-hbase4:32775] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:26,766 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32775,1689646526307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:26,783 INFO [RS:1;jenkins-hbase4:46217] regionserver.Replication(203): jenkins-hbase4.apache.org,46217,1689646526157 started 2023-07-18 02:15:26,783 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46217,1689646526157, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46217, sessionid=0x101763670720002 2023-07-18 02:15:26,783 INFO [RS:0;jenkins-hbase4:42149] regionserver.Replication(203): jenkins-hbase4.apache.org,42149,1689646526006 started 2023-07-18 02:15:26,783 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:26,783 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42149,1689646526006, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42149, sessionid=0x101763670720001 2023-07-18 02:15:26,783 DEBUG [RS:1;jenkins-hbase4:46217] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,783 DEBUG [RS:1;jenkins-hbase4:46217] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46217,1689646526157' 2023-07-18 02:15:26,783 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:26,783 DEBUG [RS:0;jenkins-hbase4:42149] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,783 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42149,1689646526006' 2023-07-18 02:15:26,783 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:26,783 DEBUG [RS:1;jenkins-hbase4:46217] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:26,784 DEBUG [RS:1;jenkins-hbase4:46217] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42149,1689646526006' 2023-07-18 02:15:26,784 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:26,784 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46217,1689646526157' 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:26,785 DEBUG [RS:0;jenkins-hbase4:42149] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:26,785 DEBUG [RS:0;jenkins-hbase4:42149] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:26,785 INFO [RS:0;jenkins-hbase4:42149] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:15:26,785 INFO [RS:0;jenkins-hbase4:42149] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 02:15:26,785 DEBUG [RS:1;jenkins-hbase4:46217] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:26,785 INFO [RS:1;jenkins-hbase4:46217] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:15:26,785 INFO [RS:1;jenkins-hbase4:46217] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 02:15:26,792 INFO [RS:2;jenkins-hbase4:32775] regionserver.Replication(203): jenkins-hbase4.apache.org,32775,1689646526307 started 2023-07-18 02:15:26,792 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,32775,1689646526307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:32775, sessionid=0x101763670720003 2023-07-18 02:15:26,792 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:26,792 DEBUG [RS:2;jenkins-hbase4:32775] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,792 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32775,1689646526307' 2023-07-18 02:15:26,792 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,32775,1689646526307' 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:26,793 DEBUG [RS:2;jenkins-hbase4:32775] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:26,794 DEBUG [RS:2;jenkins-hbase4:32775] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:26,794 INFO [RS:2;jenkins-hbase4:32775] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:15:26,794 INFO [RS:2;jenkins-hbase4:32775] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 02:15:26,866 DEBUG [jenkins-hbase4:34701] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 02:15:26,867 DEBUG [jenkins-hbase4:34701] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:26,867 DEBUG [jenkins-hbase4:34701] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:26,867 DEBUG [jenkins-hbase4:34701] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:26,867 DEBUG [jenkins-hbase4:34701] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:26,867 DEBUG [jenkins-hbase4:34701] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:26,868 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42149,1689646526006, state=OPENING 2023-07-18 02:15:26,870 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 02:15:26,871 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:26,871 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:26,871 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42149,1689646526006}] 2023-07-18 02:15:26,887 INFO [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42149%2C1689646526006, suffix=, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42149,1689646526006, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs, maxLogs=32 2023-07-18 02:15:26,887 INFO [RS:1;jenkins-hbase4:46217] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46217%2C1689646526157, suffix=, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,46217,1689646526157, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs, maxLogs=32 2023-07-18 02:15:26,895 INFO [RS:2;jenkins-hbase4:32775] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32775%2C1689646526307, suffix=, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,32775,1689646526307, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs, maxLogs=32 2023-07-18 02:15:26,916 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:26,916 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:26,916 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:26,919 WARN [ReadOnlyZKClient-127.0.0.1:64106@0x127c9385] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 02:15:26,919 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:26,924 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60492, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:26,935 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42149] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:60492 deadline: 1689646586924, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:26,935 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:26,936 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:26,936 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:26,936 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:26,937 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:26,937 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:26,938 INFO [RS:1;jenkins-hbase4:46217] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,46217,1689646526157/jenkins-hbase4.apache.org%2C46217%2C1689646526157.1689646526887 2023-07-18 02:15:26,943 INFO [RS:0;jenkins-hbase4:42149] 
wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42149,1689646526006/jenkins-hbase4.apache.org%2C42149%2C1689646526006.1689646526887 2023-07-18 02:15:26,943 INFO [RS:2;jenkins-hbase4:32775] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,32775,1689646526307/jenkins-hbase4.apache.org%2C32775%2C1689646526307.1689646526896 2023-07-18 02:15:26,946 DEBUG [RS:1;jenkins-hbase4:46217] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK]] 2023-07-18 02:15:26,946 DEBUG [RS:2;jenkins-hbase4:32775] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK], DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK]] 2023-07-18 02:15:26,946 DEBUG [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK]] 2023-07-18 02:15:27,027 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:27,029 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:15:27,031 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60502, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:27,035 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 02:15:27,035 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:27,037 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42149%2C1689646526006.meta, suffix=.meta, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42149,1689646526006, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs, maxLogs=32 2023-07-18 02:15:27,051 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:27,052 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:27,052 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:27,054 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42149,1689646526006/jenkins-hbase4.apache.org%2C42149%2C1689646526006.meta.1689646527037.meta 2023-07-18 02:15:27,056 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK], DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK]] 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 02:15:27,057 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 02:15:27,057 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 02:15:27,058 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 02:15:27,060 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/info 2023-07-18 02:15:27,060 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/info 2023-07-18 02:15:27,060 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 02:15:27,060 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:27,061 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 02:15:27,061 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:27,061 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/rep_barrier 2023-07-18 02:15:27,062 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 02:15:27,062 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:27,062 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 02:15:27,063 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/table 2023-07-18 02:15:27,063 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/table 2023-07-18 02:15:27,063 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 02:15:27,064 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:27,064 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740 2023-07-18 02:15:27,065 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740 2023-07-18 02:15:27,067 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 02:15:27,068 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 02:15:27,069 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10286042880, jitterRate=-0.04203760623931885}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 02:15:27,069 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 02:15:27,070 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689646527027 2023-07-18 02:15:27,074 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 02:15:27,075 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 02:15:27,075 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42149,1689646526006, state=OPEN 2023-07-18 02:15:27,076 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 02:15:27,077 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 02:15:27,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 02:15:27,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42149,1689646526006 in 206 msec 2023-07-18 02:15:27,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 02:15:27,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-07-18 02:15:27,081 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 469 msec 2023-07-18 02:15:27,081 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689646527081, completionTime=-1 2023-07-18 02:15:27,082 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 02:15:27,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 02:15:27,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 02:15:27,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689646587086 2023-07-18 02:15:27,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689646647086 2023-07-18 02:15:27,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34701,1689646525826-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34701,1689646525826-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34701,1689646525826-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34701, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 02:15:27,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:27,094 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 02:15:27,095 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 02:15:27,096 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:27,097 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:27,098 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/namespace/5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,099 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/namespace/5d42ab326f55041590a03c94226111bd empty. 2023-07-18 02:15:27,099 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/namespace/5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,099 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 02:15:27,190 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 02:15:27,239 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:27,242 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 02:15:27,259 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:27,260 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup 
execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:27,267 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,267 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5 empty. 2023-07-18 02:15:27,268 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,268 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 02:15:27,345 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:27,349 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4ff67b2ac9c9087f40b9b252696553d5, NAME => 'hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp 2023-07-18 02:15:27,376 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:27,376 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 4ff67b2ac9c9087f40b9b252696553d5, disabling compactions & flushes 2023-07-18 02:15:27,377 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:27,377 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:27,377 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. after waiting 0 ms 2023-07-18 02:15:27,377 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:27,377 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 
2023-07-18 02:15:27,377 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 4ff67b2ac9c9087f40b9b252696553d5: 2023-07-18 02:15:27,379 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:27,380 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646527380"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646527380"}]},"ts":"1689646527380"} 2023-07-18 02:15:27,383 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:27,384 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:27,384 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646527384"}]},"ts":"1689646527384"} 2023-07-18 02:15:27,385 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 02:15:27,389 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:27,390 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:27,390 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:27,390 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:27,390 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:27,391 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4ff67b2ac9c9087f40b9b252696553d5, ASSIGN}] 2023-07-18 02:15:27,392 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=4ff67b2ac9c9087f40b9b252696553d5, ASSIGN 2023-07-18 02:15:27,395 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=4ff67b2ac9c9087f40b9b252696553d5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42149,1689646526006; forceNewPlan=false, retain=false 2023-07-18 02:15:27,517 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:27,519 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5d42ab326f55041590a03c94226111bd, NAME => 'hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', 
VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5d42ab326f55041590a03c94226111bd, disabling compactions & flushes 2023-07-18 02:15:27,530 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. after waiting 0 ms 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,530 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,530 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5d42ab326f55041590a03c94226111bd: 2023-07-18 02:15:27,533 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:27,534 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646527534"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646527534"}]},"ts":"1689646527534"} 2023-07-18 02:15:27,535 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 02:15:27,536 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:27,536 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646527536"}]},"ts":"1689646527536"} 2023-07-18 02:15:27,538 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 02:15:27,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:27,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:27,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:27,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:27,543 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:27,543 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5d42ab326f55041590a03c94226111bd, ASSIGN}] 2023-07-18 02:15:27,545 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5d42ab326f55041590a03c94226111bd, ASSIGN 2023-07-18 02:15:27,545 INFO [jenkins-hbase4:34701] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:27,547 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4ff67b2ac9c9087f40b9b252696553d5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:27,547 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646527546"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646527546"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646527546"}]},"ts":"1689646527546"} 2023-07-18 02:15:27,547 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5d42ab326f55041590a03c94226111bd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,32775,1689646526307; forceNewPlan=false, retain=false 2023-07-18 02:15:27,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 4ff67b2ac9c9087f40b9b252696553d5, server=jenkins-hbase4.apache.org,42149,1689646526006}] 2023-07-18 02:15:27,698 INFO [jenkins-hbase4:34701] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 02:15:27,699 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5d42ab326f55041590a03c94226111bd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:27,700 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646527699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646527699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646527699"}]},"ts":"1689646527699"} 2023-07-18 02:15:27,701 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 5d42ab326f55041590a03c94226111bd, server=jenkins-hbase4.apache.org,32775,1689646526307}] 2023-07-18 02:15:27,708 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:27,708 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4ff67b2ac9c9087f40b9b252696553d5, NAME => 'hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. service=MultiRowMutationService 2023-07-18 02:15:27,709 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,709 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,711 INFO [StoreOpener-4ff67b2ac9c9087f40b9b252696553d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,712 DEBUG [StoreOpener-4ff67b2ac9c9087f40b9b252696553d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/m 2023-07-18 02:15:27,712 DEBUG [StoreOpener-4ff67b2ac9c9087f40b9b252696553d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/m 2023-07-18 02:15:27,713 INFO [StoreOpener-4ff67b2ac9c9087f40b9b252696553d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4ff67b2ac9c9087f40b9b252696553d5 columnFamilyName m 2023-07-18 02:15:27,713 INFO [StoreOpener-4ff67b2ac9c9087f40b9b252696553d5-1] regionserver.HStore(310): Store=4ff67b2ac9c9087f40b9b252696553d5/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:27,714 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,718 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:27,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:15:27,719 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 02:15:27,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 02:15:27,719 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 02:15:27,719 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:15:27,720 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 02:15:27,721 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:27,721 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4ff67b2ac9c9087f40b9b252696553d5; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1c53167a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:27,721 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4ff67b2ac9c9087f40b9b252696553d5: 2023-07-18 02:15:27,722 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5., pid=8, masterSystemTime=1689646527704 2023-07-18 02:15:27,725 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:27,725 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 
2023-07-18 02:15:27,725 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=4ff67b2ac9c9087f40b9b252696553d5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:27,725 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689646527725"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646527725"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646527725"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646527725"}]},"ts":"1689646527725"} 2023-07-18 02:15:27,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 02:15:27,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 4ff67b2ac9c9087f40b9b252696553d5, server=jenkins-hbase4.apache.org,42149,1689646526006 in 175 msec 2023-07-18 02:15:27,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-18 02:15:27,731 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=4ff67b2ac9c9087f40b9b252696553d5, ASSIGN in 337 msec 2023-07-18 02:15:27,732 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:27,732 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646527732"}]},"ts":"1689646527732"} 2023-07-18 02:15:27,735 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 02:15:27,737 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:27,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 498 msec 2023-07-18 02:15:27,755 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 02:15:27,755 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-18 02:15:27,765 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:27,765 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:27,768 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:15:27,769 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 02:15:27,855 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:27,855 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:15:27,857 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37602, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:27,860 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5d42ab326f55041590a03c94226111bd, NAME => 'hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:27,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:27,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,863 INFO [StoreOpener-5d42ab326f55041590a03c94226111bd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,865 DEBUG [StoreOpener-5d42ab326f55041590a03c94226111bd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/info 2023-07-18 02:15:27,865 DEBUG [StoreOpener-5d42ab326f55041590a03c94226111bd-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/info 2023-07-18 02:15:27,865 INFO [StoreOpener-5d42ab326f55041590a03c94226111bd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5d42ab326f55041590a03c94226111bd columnFamilyName info 2023-07-18 02:15:27,866 INFO [StoreOpener-5d42ab326f55041590a03c94226111bd-1] regionserver.HStore(310): Store=5d42ab326f55041590a03c94226111bd/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:27,867 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,870 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:27,873 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:27,874 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5d42ab326f55041590a03c94226111bd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10280718560, jitterRate=-0.042533472180366516}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:27,874 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5d42ab326f55041590a03c94226111bd: 2023-07-18 02:15:27,875 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd., pid=9, masterSystemTime=1689646527855 2023-07-18 02:15:27,878 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 
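At this point the hbase:namespace region has just been opened on jenkins-hbase4.apache.org,32775. Test code built on HBaseTestingUtility typically blocks until such a system table is assigned and reachable before doing anything else; the helper below is a hedged sketch of that wait, with the method name chosen for illustration only.

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForNamespaceSketch {
  // Hedged sketch: block until the hbase:namespace system table is assigned and serving.
  static void waitForNamespaceTable(HBaseTestingUtility util)
      throws IOException, InterruptedException {
    TableName ns = TableName.valueOf("hbase:namespace");
    util.waitUntilAllRegionsAssigned(ns); // all regions reported OPEN in hbase:meta
    util.waitTableAvailable(ns);          // table is enabled and reachable by clients
  }
}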
2023-07-18 02:15:27,879 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. 2023-07-18 02:15:27,879 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5d42ab326f55041590a03c94226111bd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:27,879 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689646527879"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646527879"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646527879"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646527879"}]},"ts":"1689646527879"} 2023-07-18 02:15:27,882 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 02:15:27,882 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 5d42ab326f55041590a03c94226111bd, server=jenkins-hbase4.apache.org,32775,1689646526307 in 179 msec 2023-07-18 02:15:27,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-18 02:15:27,885 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5d42ab326f55041590a03c94226111bd, ASSIGN in 339 msec 2023-07-18 02:15:27,886 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:27,886 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646527886"}]},"ts":"1689646527886"} 2023-07-18 02:15:27,891 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 02:15:27,893 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:27,894 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 800 msec 2023-07-18 02:15:27,895 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 02:15:27,896 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:27,897 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:27,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=ClientService, sasl=false 2023-07-18 02:15:27,902 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37618, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:27,907 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 02:15:27,933 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:27,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 31 msec 2023-07-18 02:15:27,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 02:15:27,952 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:27,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-18 02:15:27,972 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 02:15:27,975 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.506sec 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34701,1689646525826-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 02:15:27,975 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34701,1689646525826-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
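"Master has completed initialization" above is where the test harness starts acting as a regular client, and the "set balanceSwitch=false" entry just below shows it switching the balancer off so region placement stays deterministic for the rest of the test. A minimal client-side sketch of that step follows; the class and method names are illustrative, not taken from the test source.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableBalancerSketch {
  // Hedged sketch: connect to the mini-cluster and switch the balancer off,
  // which is what produces the master's "set balanceSwitch=false" audit line.
  static void disableBalancer(Configuration conf) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.balancerSwitch(false, true); // off, synchronous
    }
  }
}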
2023-07-18 02:15:27,987 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 02:15:28,060 DEBUG [Listener at localhost/42627] zookeeper.ReadOnlyZKClient(139): Connect 0x4875d50c to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:28,068 DEBUG [Listener at localhost/42627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@357edf8c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:28,070 DEBUG [hconnection-0x5934234d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:28,071 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60516, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:28,073 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:28,073 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:28,076 DEBUG [Listener at localhost/42627] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 02:15:28,078 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47884, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 02:15:28,081 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 02:15:28,081 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:28,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 02:15:28,082 DEBUG [Listener at localhost/42627] zookeeper.ReadOnlyZKClient(139): Connect 0x575b56c6 to 127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:28,094 DEBUG [Listener at localhost/42627] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f0d392b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:28,095 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:28,103 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:28,104 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10176367072000a connected 2023-07-18 
02:15:28,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,110 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 02:15:28,125 INFO [Listener at localhost/42627] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 02:15:28,126 INFO [Listener at localhost/42627] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 02:15:28,127 INFO [Listener at localhost/42627] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42297 2023-07-18 02:15:28,127 INFO [Listener at localhost/42627] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 02:15:28,129 DEBUG [Listener at localhost/42627] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 02:15:28,130 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:28,131 INFO [Listener at localhost/42627] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 02:15:28,132 INFO [Listener at localhost/42627] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42297 connecting to ZooKeeper ensemble=127.0.0.1:64106 2023-07-18 02:15:28,136 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:422970x0, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 02:15:28,139 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(162): regionserver:422970x0, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 02:15:28,139 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): 
regionserver:42297-0x10176367072000b connected 2023-07-18 02:15:28,140 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 02:15:28,141 DEBUG [Listener at localhost/42627] zookeeper.ZKUtil(164): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 02:15:28,158 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42297 2023-07-18 02:15:28,159 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42297 2023-07-18 02:15:28,161 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42297 2023-07-18 02:15:28,161 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42297 2023-07-18 02:15:28,161 DEBUG [Listener at localhost/42627] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42297 2023-07-18 02:15:28,163 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 02:15:28,164 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 02:15:28,164 INFO [Listener at localhost/42627] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 02:15:28,164 INFO [Listener at localhost/42627] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 02:15:28,164 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 02:15:28,165 INFO [Listener at localhost/42627] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 02:15:28,165 INFO [Listener at localhost/42627] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 02:15:28,165 INFO [Listener at localhost/42627] http.HttpServer(1146): Jetty bound to port 39647 2023-07-18 02:15:28,166 INFO [Listener at localhost/42627] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 02:15:28,169 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:28,170 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d3c5a39{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,AVAILABLE} 2023-07-18 02:15:28,170 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:28,170 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d37a2b0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-18 02:15:28,291 INFO [Listener at localhost/42627] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 02:15:28,291 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 02:15:28,291 INFO [Listener at localhost/42627] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 02:15:28,292 INFO [Listener at localhost/42627] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 02:15:28,292 INFO [Listener at localhost/42627] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 02:15:28,293 INFO [Listener at localhost/42627] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3e8cf2fc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/java.io.tmpdir/jetty-0_0_0_0-39647-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2313348795706360161/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:28,297 INFO [Listener at localhost/42627] server.AbstractConnector(333): Started ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:39647} 2023-07-18 02:15:28,298 INFO [Listener at localhost/42627] server.Server(415): Started @46509ms 2023-07-18 02:15:28,301 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(951): ClusterId : 19d49dd7-0c61-44f8-81d7-41a3754fa79f 2023-07-18 02:15:28,301 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 02:15:28,302 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 02:15:28,303 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 02:15:28,305 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 02:15:28,308 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ReadOnlyZKClient(139): Connect 0x003ea3a7 to 
127.0.0.1:64106 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 02:15:28,314 DEBUG [RS:3;jenkins-hbase4:42297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ef95dd0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 02:15:28,315 DEBUG [RS:3;jenkins-hbase4:42297] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ac6c8d7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 02:15:28,324 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42297 2023-07-18 02:15:28,324 INFO [RS:3;jenkins-hbase4:42297] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 02:15:28,324 INFO [RS:3;jenkins-hbase4:42297] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 02:15:28,324 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 02:15:28,325 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34701,1689646525826 with isa=jenkins-hbase4.apache.org/172.31.14.131:42297, startcode=1689646528125 2023-07-18 02:15:28,325 DEBUG [RS:3;jenkins-hbase4:42297] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 02:15:28,328 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48347, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 02:15:28,328 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34701] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,328 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 02:15:28,329 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940 2023-07-18 02:15:28,329 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:41331 2023-07-18 02:15:28,329 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36071 2023-07-18 02:15:28,333 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:28,333 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:28,334 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:28,333 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:28,333 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 02:15:28,334 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,334 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 02:15:28,334 WARN [RS:3;jenkins-hbase4:42297] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 02:15:28,334 INFO [RS:3;jenkins-hbase4:42297] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 02:15:28,334 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1948): logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,335 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42297,1689646528125] 2023-07-18 02:15:28,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:28,335 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,337 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:28,337 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:28,338 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,338 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,339 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,340 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:28,340 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,340 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:28,340 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ZKUtil(162): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,341 DEBUG [RS:3;jenkins-hbase4:42297] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 02:15:28,341 INFO [RS:3;jenkins-hbase4:42297] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 02:15:28,342 INFO [RS:3;jenkins-hbase4:42297] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 02:15:28,343 INFO [RS:3;jenkins-hbase4:42297] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 02:15:28,343 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:28,346 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 02:15:28,348 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:28,349 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,350 DEBUG [RS:3;jenkins-hbase4:42297] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 02:15:28,351 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:28,351 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:28,351 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 02:15:28,366 INFO [RS:3;jenkins-hbase4:42297] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 02:15:28,366 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42297,1689646528125-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 02:15:28,382 INFO [RS:3;jenkins-hbase4:42297] regionserver.Replication(203): jenkins-hbase4.apache.org,42297,1689646528125 started 2023-07-18 02:15:28,382 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42297,1689646528125, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42297, sessionid=0x10176367072000b 2023-07-18 02:15:28,382 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 02:15:28,382 DEBUG [RS:3;jenkins-hbase4:42297] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,382 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42297,1689646528125' 2023-07-18 02:15:28,382 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 02:15:28,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42297,1689646528125' 2023-07-18 02:15:28,384 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 02:15:28,385 DEBUG [RS:3;jenkins-hbase4:42297] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 02:15:28,385 DEBUG [RS:3;jenkins-hbase4:42297] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 02:15:28,386 INFO [RS:3;jenkins-hbase4:42297] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 02:15:28,386 INFO [RS:3;jenkins-hbase4:42297] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
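The handler entry above records the client-side calls that produced "add rsgroup master" followed by "list rsgroup". A hedged sketch of issuing the same calls is shown below; it assumes the RSGroupAdminClient API from the hbase-rsgroup module (the class named in the stack traces further down) and requires the RSGroupAdminEndpoint coprocessor to be loaded on the master, as it is in this mini cluster.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class AddRSGroupSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumes the RSGroupAdminEndpoint coprocessor is installed on the master.
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.addRSGroup("master");                     // "add rsgroup master" in the log
          for (RSGroupInfo info : groups.listRSGroups()) { // "list rsgroup" in the log
            System.out.println(info.getName() + " servers=" + info.getServers());
          }
        }
      }
    }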
2023-07-18 02:15:28,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:28,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:28,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:28,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:28,392 DEBUG [hconnection-0x552ab686-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 02:15:28,396 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 02:15:28,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:28,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:28,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47884 deadline: 1689647728406, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
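The ConstraintException above ("Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist") is raised because the teardown tries to move the master's address, which is not a region server, into the "master" group; TestRSGroupsBase.tearDownAfterMethod tolerates it and merely logs "Got this on setup, FYI", as the next entry shows. The hedged sketch below mirrors that best-effort pattern; the helper name moveMasterAddress is hypothetical, and the RSGroupAdminClient/Address signatures are assumed from branch-2.4.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class TolerantMoveServersSketch {
      // Best-effort move of the master's host:port into the "master" group; the master is not
      // a region server, so RSGroupAdminServer rejects it with the ConstraintException above.
      static void moveMasterAddress(Connection conn, String masterHostPort) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        try {
          groups.moveServers(Collections.singleton(Address.fromString(masterHostPort)), "master");
        } catch (ConstraintException e) {
          // Mirrors TestRSGroupsBase: log and keep going, the cleanup is best-effort.
          System.out.println("Got this on setup, FYI: " + e.getMessage());
        }
      }

      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          moveMasterAddress(conn, args[0]); // e.g. "jenkins-hbase4.apache.org:34701"
        }
      }
    }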
2023-07-18 02:15:28,407 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:28,408 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:28,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,410 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:28,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:28,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:28,457 INFO [Listener at localhost/42627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=554 (was 523) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@66ba58da java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@22bbfe23 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:42149 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5934234d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 34679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2347 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp912985556-2622 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1497269373_17 at /127.0.0.1:45580 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x575b56c6-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4875d50c-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp774357224-2312-acceptor-0@4b7c6d8e-ServerConnector@6d04da20{HTTP/1.1, (http/1.1)}{0.0.0.0:39899} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42297-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1726500102_17 at /127.0.0.1:32966 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x127c9385-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4875d50c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1726500102_17 at /127.0.0.1:34508 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5daf6e09 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2356-acceptor-0@7f7316bc-ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:34909} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2618 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp679398534-2353 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@dd43c78[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:64106@0x561f31ec-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/42627 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 33349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data1/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646526638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 34679 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 41331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp912985556-2623 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 34679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-5af53e15-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data2/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940-prefix:jenkins-hbase4.apache.org,42149,1689646526006.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353748819-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45369 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 267518829@qtp-524088327-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3bdb6fe java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 42627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_1497269373_17 at /127.0.0.1:34558 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@105e13de java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp774357224-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:41331 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2625 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_128455721_17 at /127.0.0.1:34540 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3ff34e7b-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2621 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:42149-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 749260906@qtp-1334008484-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@1706e5e4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:33010 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:64106): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data4/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1497269373_17 at /127.0.0.1:34484 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:34701 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646526631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp976075850-2341 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x003ea3a7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:46217 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_128455721_17 at /127.0.0.1:45566 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53987@0x34a7cc7e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42297Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 41331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp353748819-2250 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x561f31ec sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1102316695@qtp-1334008484-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35533 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp679398534-2354 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:45369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x552ab686-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Listener at localhost/42081-SendThread(127.0.0.1:53987) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,43727,1689646519894 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Session-HouseKeeper-508d202c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x3a22d183-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) 
Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2620 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x575b56c6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4875d50c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45369 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1916015537-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4edb461d-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:46217Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 734604950@qtp-524088327-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38923 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: 1790959576@qtp-924848190-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45561 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: CacheReplicationMonitor(847217425) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:41331 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x561f31ec-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:64106 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@35814f34 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-571-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:32775 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42081-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4edb461d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:32986 [Receiving block 
BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x4edb461d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 34679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x575b56c6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 42627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp774357224-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009, 
type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2281 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x3a22d183-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1680326028@qtp-924848190-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1726500102_17 at /127.0.0.1:45544 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data6/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41331 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:41331 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData-prefix:jenkins-hbase4.apache.org,34701,1689646525826 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp353748819-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2314 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:45522 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53987@0x34a7cc7e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 1 on default port 42627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 34679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
qtp679398534-2357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x127c9385-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940-prefix:jenkins-hbase4.apache.org,32775,1689646526307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:41331 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 41331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x127c9385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:45572 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp353748819-2251-acceptor-0@26e084b2-ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:36071} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42297 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@40c50e37 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45369 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2624 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42627.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data3/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x552ab686-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2358 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2342-acceptor-0@28a767dd-ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:46593} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-239ed2ad-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@59835f98 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2344 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x003ea3a7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@15e4756c[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
AsyncFSWAL-0-hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940-prefix:jenkins-hbase4.apache.org,42149,1689646526006 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1916015537-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 41331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp353748819-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_128455721_17 at /127.0.0.1:32984 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp774357224-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 42627 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp353748819-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53987@0x34a7cc7e-SendThread(127.0.0.1:53987) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:34542 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940-prefix:jenkins-hbase4.apache.org,46217,1689646526157 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33349 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:32775-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42149Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1497269373_17 at /127.0.0.1:32994 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data5/current/BP-681486909-172.31.14.131-1689646525078 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 2058842548@qtp-195320339-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1387842992@qtp-195320339-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41929 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2346 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:32775Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp353748819-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2348 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2343 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp912985556-2619-acceptor-0@63156885-ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:39647} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34701,1689646525826 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@69bdef2a sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x3a22d183 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/602809530.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:45369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp774357224-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:1;jenkins-hbase4:46217-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp353748819-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45369 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:41331 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1916015537-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@13fd759e[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2282-acceptor-0@379c055-ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:42623} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 34679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5ca65b20-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42149 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:45369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42149 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-681486909-172.31.14.131-1689646525078:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33349 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@164e4df3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for 
client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:45594 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_795273726_17 at /127.0.0.1:34566 [Receiving block BP-681486909-172.31.14.131-1689646525078:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1916015537-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp976075850-2345 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (699281955) connection to localhost/127.0.0.1:45369 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: jenkins-hbase4:34701 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: 
LeaseRenewer:jenkins.hfs.6@localhost:45369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2352 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@443e44f8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42627-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp679398534-2355 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/2071299855.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@1d7abbee java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp679398534-2359 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64106@0x003ea3a7-SendThread(127.0.0.1:64106) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 822) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 420), ProcessCount=170 (was 170), AvailableMemoryMB=4441 (was 4653) 2023-07-18 02:15:28,461 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-18 02:15:28,484 INFO [Listener at localhost/42627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=554, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=170, AvailableMemoryMB=4439 2023-07-18 02:15:28,484 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=554 is superior to 500 2023-07-18 02:15:28,484 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 02:15:28,488 INFO [RS:3;jenkins-hbase4:42297] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42297%2C1689646528125, suffix=, logDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42297,1689646528125, archiveDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs, maxLogs=32 2023-07-18 02:15:28,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:28,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
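The ListRSGroupInfos / MoveTables / MoveServers / RemoveRSGroup / AddRSGroup requests logged around here are the test harness restoring rsgroup state between methods through the RSGroup admin client. The following is only a rough client-side sketch of that sequence, not the test's own code: the class name RSGroupAdminSketch, the sample server address/port, and the assumption that RSGroupAdminClient is constructed from a Connection are all illustrative.

    // Illustrative sketch of the rsgroup admin calls seen in the log above/below.
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumption: constructing the client directly from a Connection, as the tests do.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // ListRSGroupInfos: enumerate groups with their servers and tables.
          for (RSGroupInfo info : rsGroupAdmin.listRSGroups()) {
            System.out.println(info.getName() + " servers=" + info.getServers());
          }

          // MoveTables / MoveServers back to the default group. An empty table set is
          // simply ignored by the master, as the DEBUG line above shows.
          rsGroupAdmin.moveTables(Collections.emptySet(), RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42297)), // sample address
              RSGroupInfo.DEFAULT_GROUP);

          // RemoveRSGroup / AddRSGroup, mirroring the "master" group churn in the log.
          rsGroupAdmin.removeRSGroup("master");
          rsGroupAdmin.addRSGroup("master");
        }
      }
    }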
2023-07-18 02:15:28,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:28,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:28,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:28,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:28,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:28,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:28,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:28,498 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:28,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:28,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:28,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:28,509 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK] 2023-07-18 02:15:28,510 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK] 2023-07-18 02:15:28,512 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK] 2023-07-18 02:15:28,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:28,515 INFO [RS:3;jenkins-hbase4:42297] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/WALs/jenkins-hbase4.apache.org,42297,1689646528125/jenkins-hbase4.apache.org%2C42297%2C1689646528125.1689646528488 2023-07-18 02:15:28,515 DEBUG [RS:3;jenkins-hbase4:42297] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:35061,DS-c23070b9-4420-4118-9f01-dcf5c111c9ec,DISK], DatanodeInfoWithStorage[127.0.0.1:37759,DS-3f9faa3e-7a97-475a-a68f-1dd43adc8a7e,DISK], DatanodeInfoWithStorage[127.0.0.1:39839,DS-1de6045e-347e-487b-a9d9-61a01cb59513,DISK]] 2023-07-18 02:15:28,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:28,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:28,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47884 deadline: 1689647728520, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:28,520 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:28,522 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:28,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:28,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:28,523 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:28,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:28,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:28,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:28,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 02:15:28,527 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:28,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 02:15:28,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 02:15:28,529 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:28,529 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:28,530 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:28,532 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 02:15:28,533 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 
02:15:28,534 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c empty. 2023-07-18 02:15:28,534 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,534 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 02:15:28,539 WARN [IPC Server handler 1 on default port 41331] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-18 02:15:28,540 WARN [IPC Server handler 1 on default port 41331] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-18 02:15:28,540 WARN [IPC Server handler 1 on default port 41331] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-18 02:15:28,549 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 02:15:28,551 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2fb6002ffe2de71f5864dfae108a943c, NAME => 't1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp 2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 2fb6002ffe2de71f5864dfae108a943c, disabling compactions & flushes 2023-07-18 02:15:28,566 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 
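The create 't1' request above spells out the full schema the master received (one family cf1 with BLOOMFILTER NONE, VERSIONS 1, BLOCKSIZE 65536, REPLICATION_SCOPE 0, and so on). A minimal sketch of issuing an equivalent create through the public Admin API follows; the class name CreateT1Sketch and the standalone-client setup are assumptions, not taken from the test source.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateT1Sketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Column family 'cf1' with the attributes printed in the create request above.
          ColumnFamilyDescriptor cf1 = ColumnFamilyDescriptorBuilder
              .newBuilder(Bytes.toBytes("cf1"))
              .setBloomFilterType(BloomType.NONE)
              .setMaxVersions(1)
              .setMinVersions(0)
              .setKeepDeletedCells(KeepDeletedCells.FALSE)
              .setDataBlockEncoding(DataBlockEncoding.NONE)
              .setCompressionType(Compression.Algorithm.NONE)
              .setTimeToLive(HConstants.FOREVER)  // TTL => 'FOREVER'
              .setBlocksize(65536)
              .setBlockCacheEnabled(true)
              .setInMemory(false)
              .setScope(0)                        // REPLICATION_SCOPE => '0'
              .build();

          // Table 't1' with REGION_REPLICATION => '1'.
          TableDescriptor t1 = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("t1"))
              .setRegionReplication(1)
              .setColumnFamily(cf1)
              .build();

          // A second identical call fails with TableExistsException, as the log shows later.
          admin.createTable(t1);
        }
      }
    }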
2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. after waiting 0 ms 2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:28,566 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:28,566 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 2fb6002ffe2de71f5864dfae108a943c: 2023-07-18 02:15:28,569 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 02:15:28,570 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646528570"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646528570"}]},"ts":"1689646528570"} 2023-07-18 02:15:28,572 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 02:15:28,575 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 02:15:28,575 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646528575"}]},"ts":"1689646528575"} 2023-07-18 02:15:28,576 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 02:15:28,579 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 02:15:28,579 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 02:15:28,579 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 02:15:28,579 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 02:15:28,579 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 02:15:28,580 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 02:15:28,580 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, ASSIGN}] 2023-07-18 02:15:28,581 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, ASSIGN 2023-07-18 02:15:28,582 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, ASSIGN; 
state=OFFLINE, location=jenkins-hbase4.apache.org,46217,1689646526157; forceNewPlan=false, retain=false 2023-07-18 02:15:28,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 02:15:28,732 INFO [jenkins-hbase4:34701] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 02:15:28,733 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2fb6002ffe2de71f5864dfae108a943c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,734 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646528733"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646528733"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646528733"}]},"ts":"1689646528733"} 2023-07-18 02:15:28,735 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 2fb6002ffe2de71f5864dfae108a943c, server=jenkins-hbase4.apache.org,46217,1689646526157}] 2023-07-18 02:15:28,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 02:15:28,888 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,888 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 02:15:28,890 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43744, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 02:15:28,893 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 
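The TransitRegionStateProcedure / OpenRegionProcedure steps above are what the test utility waits on a few lines below ("Waiting until all regions of table t1 get assigned"). A client-side way to poll for the same condition is sketched here with stock RegionLocator calls; it is not the utility's own implementation, and the class name, timeout, and sleep interval are arbitrary choices.

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class WaitForAssignment {
      // Polls hbase:meta-backed region locations until every region reports a hosting server.
      public static void waitUntilAssigned(Connection conn, TableName table, long timeoutMs)
          throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        try (RegionLocator locator = conn.getRegionLocator(table)) {
          while (System.currentTimeMillis() < deadline) {
            boolean allAssigned = true;
            for (HRegionLocation loc : locator.getAllRegionLocations()) {
              if (loc == null || loc.getServerName() == null) {
                allAssigned = false;
                break;
              }
            }
            if (allAssigned) {
              return;
            }
            Thread.sleep(200); // arbitrary poll interval
          }
        }
        throw new IllegalStateException("Regions of " + table + " not assigned in time");
      }
    }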
2023-07-18 02:15:28,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2fb6002ffe2de71f5864dfae108a943c, NAME => 't1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.', STARTKEY => '', ENDKEY => ''} 2023-07-18 02:15:28,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 02:15:28,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,895 INFO [StoreOpener-2fb6002ffe2de71f5864dfae108a943c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,896 DEBUG [StoreOpener-2fb6002ffe2de71f5864dfae108a943c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/cf1 2023-07-18 02:15:28,896 DEBUG [StoreOpener-2fb6002ffe2de71f5864dfae108a943c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/cf1 2023-07-18 02:15:28,897 INFO [StoreOpener-2fb6002ffe2de71f5864dfae108a943c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2fb6002ffe2de71f5864dfae108a943c columnFamilyName cf1 2023-07-18 02:15:28,897 INFO [StoreOpener-2fb6002ffe2de71f5864dfae108a943c-1] regionserver.HStore(310): Store=2fb6002ffe2de71f5864dfae108a943c/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 02:15:28,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:28,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 02:15:28,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2fb6002ffe2de71f5864dfae108a943c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11911267200, jitterRate=0.10932320356369019}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 02:15:28,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2fb6002ffe2de71f5864dfae108a943c: 2023-07-18 02:15:28,906 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c., pid=14, masterSystemTime=1689646528888 2023-07-18 02:15:28,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:28,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:28,913 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=2fb6002ffe2de71f5864dfae108a943c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:28,913 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646528913"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689646528913"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689646528913"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689646528913"}]},"ts":"1689646528913"} 2023-07-18 02:15:28,917 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 02:15:28,917 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 2fb6002ffe2de71f5864dfae108a943c, server=jenkins-hbase4.apache.org,46217,1689646526157 in 180 msec 2023-07-18 02:15:28,919 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 02:15:28,919 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, ASSIGN in 337 msec 2023-07-18 02:15:28,920 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 02:15:28,920 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646528920"}]},"ts":"1689646528920"} 2023-07-18 02:15:28,921 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 02:15:28,925 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 02:15:28,927 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 400 msec 2023-07-18 02:15:29,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 02:15:29,132 INFO [Listener at localhost/42627] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 02:15:29,132 DEBUG [Listener at localhost/42627] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 02:15:29,132 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,134 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 02:15:29,134 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,134 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 2023-07-18 02:15:29,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 02:15:29,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 02:15:29,139 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 02:15:29,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 02:15:29,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:47884 deadline: 1689646589136, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 02:15:29,141 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-18 02:15:29,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,243 INFO [Listener at localhost/42627] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 02:15:29,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 02:15:29,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 02:15:29,248 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646529248"}]},"ts":"1689646529248"} 2023-07-18 02:15:29,249 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 02:15:29,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:29,251 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 02:15:29,251 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, UNASSIGN}] 2023-07-18 02:15:29,252 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, UNASSIGN 2023-07-18 02:15:29,253 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=2fb6002ffe2de71f5864dfae108a943c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:29,253 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646529253"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689646529253"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689646529253"}]},"ts":"1689646529253"} 2023-07-18 02:15:29,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 2fb6002ffe2de71f5864dfae108a943c, server=jenkins-hbase4.apache.org,46217,1689646526157}] 2023-07-18 02:15:29,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:29,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:29,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2fb6002ffe2de71f5864dfae108a943c, disabling compactions & flushes 2023-07-18 02:15:29,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:29,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:29,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. after waiting 0 ms 2023-07-18 02:15:29,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 2023-07-18 02:15:29,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 02:15:29,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c. 
2023-07-18 02:15:29,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2fb6002ffe2de71f5864dfae108a943c: 2023-07-18 02:15:29,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:29,413 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=2fb6002ffe2de71f5864dfae108a943c, regionState=CLOSED 2023-07-18 02:15:29,414 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689646529413"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689646529413"}]},"ts":"1689646529413"} 2023-07-18 02:15:29,417 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 02:15:29,417 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 2fb6002ffe2de71f5864dfae108a943c, server=jenkins-hbase4.apache.org,46217,1689646526157 in 161 msec 2023-07-18 02:15:29,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 02:15:29,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=2fb6002ffe2de71f5864dfae108a943c, UNASSIGN in 166 msec 2023-07-18 02:15:29,420 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689646529420"}]},"ts":"1689646529420"} 2023-07-18 02:15:29,421 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 02:15:29,423 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 02:15:29,424 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 179 msec 2023-07-18 02:15:29,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 02:15:29,553 INFO [Listener at localhost/42627] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 02:15:29,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 02:15:29,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 02:15:29,556 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 02:15:29,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 02:15:29,557 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 02:15:29,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,560 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:29,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 02:15:29,561 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/cf1, FileablePath, hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/recovered.edits] 2023-07-18 02:15:29,566 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/recovered.edits/4.seqid to hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/archive/data/default/t1/2fb6002ffe2de71f5864dfae108a943c/recovered.edits/4.seqid 2023-07-18 02:15:29,566 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/.tmp/data/default/t1/2fb6002ffe2de71f5864dfae108a943c 2023-07-18 02:15:29,566 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 02:15:29,568 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 02:15:29,570 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 02:15:29,571 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 02:15:29,572 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 02:15:29,572 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-18 02:15:29,572 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689646529572"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:29,573 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 02:15:29,573 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2fb6002ffe2de71f5864dfae108a943c, NAME => 't1,,1689646528525.2fb6002ffe2de71f5864dfae108a943c.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 02:15:29,573 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 
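The DeleteTableProcedure records above archive the region directory, remove the region rows and table descriptor, and mark 't1' as deleted in hbase:meta. A brief sketch of the client call that drives this, with an illustrative existence check; the helper name and the pre-disabled assumption are mine, not the test's:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public final class DeleteTableSketch {
      // Assumes 'admin' comes from Connection.getAdmin() and the table was already disabled.
      static void dropTable(Admin admin, String name) throws IOException {
        TableName tn = TableName.valueOf(name);
        admin.deleteTable(tn);          // drives DeleteTableProcedure: archive regions, clean hbase:meta
        if (admin.tableExists(tn)) {    // descriptor and meta rows should be gone once the procedure finishes
          throw new IllegalStateException(name + " still present after delete");
        }
      }
    }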
2023-07-18 02:15:29,574 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689646529574"}]},"ts":"9223372036854775807"} 2023-07-18 02:15:29,575 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 02:15:29,577 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 02:15:29,578 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-18 02:15:29,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 02:15:29,662 INFO [Listener at localhost/42627] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 02:15:29,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
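The surrounding RSGroupAdminService records are the per-test cleanup from TestRSGroupsBase: list the groups, issue empty MoveTables/MoveServers requests (the empty MoveTables is explicitly ignored), drop and re-add the 'master' group, then attempt to move the master's address into it, which fails with ConstraintException ("Server ... is either offline or it does not exist"). A hedged sketch of that cycle, assuming the branch-2.4 RSGroupAdminClient mirrors the service operations named in the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RSGroupCleanupSketch {
      // Mirrors the cleanup visible in the log; method names follow the RSGroupAdminService
      // operations shown there (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup).
      static void resetGroups(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.listRSGroups();                                   // ListRSGroupInfos
        groups.moveTables(Collections.emptySet(), "default");    // "moveTables() passed an empty set. Ignoring."
        groups.moveServers(Collections.emptySet(), "default");   // empty MoveServers request, as in the log
        groups.removeRSGroup("master");                          // RemoveRSGroup
        groups.addRSGroup("master");                             // AddRSGroup
        // Moving the master's own address into the 'master' group is what triggers the
        // ConstraintException ("Server ... is either offline or it does not exist") logged below.
      }
    }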
2023-07-18 02:15:29,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,676 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:47884 deadline: 1689647729687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,687 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:29,691 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,692 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,710 INFO [Listener at localhost/42627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 554) - Thread LEAK? -, OpenFileDescriptor=844 (was 833) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=170 (was 170), AvailableMemoryMB=4431 (was 4439) 2023-07-18 02:15:29,710 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 02:15:29,726 INFO [Listener at localhost/42627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=170, AvailableMemoryMB=4431 2023-07-18 02:15:29,726 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 02:15:29,727 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 02:15:29,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:29,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,739 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,741 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647729748, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,749 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:29,750 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,751 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 02:15:29,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:29,754 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 02:15:29,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 02:15:29,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 02:15:29,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
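testNonExistentTableMove, starting above, looks up the group of a table that was never created ('GrouptestNonExistentTableMove') before attempting to move it to 'default'. A sketch of that lookup under the same RSGroupAdminClient assumption as before; the null-check expectation is an illustrative reading of the logged flow, not copied from the test source:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class NonExistentTableLookupSketch {
      static boolean hasGroup(Connection conn, String table) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        // GetRSGroupInfoOfTable, as in the log; expected (hedged assumption) to return null
        // for a table that belongs to no group, e.g. "GrouptestNonExistentTableMove".
        RSGroupInfo info = groups.getRSGroupInfoOfTable(TableName.valueOf(table));
        return info != null;
      }
    }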
2023-07-18 02:15:29,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,769 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,771 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647729780, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,780 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:29,782 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,783 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,803 INFO [Listener at localhost/42627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=170 (was 170), AvailableMemoryMB=4431 (was 4431) 2023-07-18 02:15:29,803 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-18 02:15:29,820 INFO [Listener at localhost/42627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=170, AvailableMemoryMB=4430 2023-07-18 02:15:29,820 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-18 02:15:29,820 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 02:15:29,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:29,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,832 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,835 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647729842, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,843 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:29,845 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,846 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 02:15:29,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,860 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647729869, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,870 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:29,871 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,872 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,891 INFO [Listener at localhost/42627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=170 (was 170), AvailableMemoryMB=4430 (was 4430) 2023-07-18 02:15:29,891 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 02:15:29,908 INFO [Listener at localhost/42627] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=844, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=170, AvailableMemoryMB=4429 2023-07-18 02:15:29,908 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 02:15:29,908 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 02:15:29,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:29,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 02:15:29,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:29,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:29,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:29,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:29,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:29,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:29,920 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:29,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:29,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,922 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:29,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:29,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:29,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647729929, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:29,930 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 02:15:29,932 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:29,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,932 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:29,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:29,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:29,933 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 02:15:29,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 02:15:29,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 02:15:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 02:15:29,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:29,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:29,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:29,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 02:15:29,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:29,950 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 02:15:29,954 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:29,958 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-18 02:15:30,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 02:15:30,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 02:15:30,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:30,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:47884 deadline: 1689647730052, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 02:15:30,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 02:15:30,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 02:15:30,072 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 02:15:30,073 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-18 02:15:30,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 02:15:30,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 02:15:30,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 02:15:30,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:30,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 02:15:30,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:30,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 02:15:30,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:30,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:30,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:30,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 02:15:30,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,193 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,196 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 02:15:30,197 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,198 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 02:15:30,198 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 02:15:30,199 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,200 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 02:15:30,201 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 02:15:30,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 02:15:30,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 02:15:30,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 02:15:30,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:30,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:30,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 02:15:30,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:30,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:30,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:30,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:30,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:47884 deadline: 1689646590308, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 02:15:30,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:30,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:30,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:30,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
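The testNamespaceConstraint entries above exercise the two rsgroup/namespace constraint checks: a group cannot be removed while a namespace's hbase.rsgroup.name property still references it ("RSGroup Group_foo is referenced by namespace: Group_foo"), and a namespace cannot be created against a group that does not exist ("Region server group foo does not exist."). Below is a minimal client-side sketch of that sequence, assuming an open Connection to the mini cluster; the class name and the second namespace name are illustrative, and this is not the test's actual source.

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class NamespaceConstraintSketch {
      // conn is assumed to point at the running mini cluster.
      static void run(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try (Admin admin = conn.getAdmin()) {
          // AddRSGroup, then CreateNamespaceProcedure (pid=20) binds the namespace to the group.
          rsGroupAdmin.addRSGroup("Group_foo");
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

          try {
            rsGroupAdmin.removeRSGroup("Group_foo"); // rejected while still referenced
          } catch (ConstraintException expected) {
            // "RSGroup Group_foo is referenced by namespace: Group_foo"
          }

          // After DeleteNamespaceProcedure (pid=22) the group can be removed.
          admin.deleteNamespace("Group_foo");
          rsGroupAdmin.removeRSGroup("Group_foo");

          // preCreateNamespace rejects a namespace naming a missing group
          // (namespace name here is illustrative).
          try {
            admin.createNamespace(NamespaceDescriptor.create("Group_foo")
                .addConfiguration("hbase.rsgroup.name", "foo").build());
          } catch (ConstraintException expected) {
            // "Region server group foo does not exist."
          }
        }
      }
    }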
2023-07-18 02:15:30,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:30,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:30,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:30,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 02:15:30,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:30,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:30,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 02:15:30,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:30,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 02:15:30,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
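The teardown entries here, together with the attempt a few entries below to move jenkins-hbase4.apache.org:34701 into the re-created "master" group, explain the WARN that follows: only live region servers are rsgroup members, and 34701 is the master's RPC port, so RSGroupAdminServer.moveServers rejects it with the ConstraintException also seen at the top of this excerpt. A rough sketch of that call, assuming a Connection to the mini cluster; the class and method names in the sketch are illustrative, not the test's code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      static void demo(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");
        try {
          // The address below is the active master, not a region server,
          // so the server-side membership check fails.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 34701)),
              "master");
        } catch (ConstraintException e) {
          // "Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist."
        }
      }
    }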
2023-07-18 02:15:30,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 02:15:30,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 02:15:30,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 02:15:30,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 02:15:30,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:30,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 02:15:30,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 02:15:30,328 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 02:15:30,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 02:15:30,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 02:15:30,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 02:15:30,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 02:15:30,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 02:15:30,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:30,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:30,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34701] to rsgroup master 2023-07-18 02:15:30,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 02:15:30,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:47884 deadline: 1689647730336, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 2023-07-18 02:15:30,336 WARN [Listener at localhost/42627] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34701 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 02:15:30,338 INFO [Listener at localhost/42627] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 02:15:30,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 02:15:30,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 02:15:30,339 INFO [Listener at localhost/42627] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:32775, jenkins-hbase4.apache.org:42149, jenkins-hbase4.apache.org:42297, jenkins-hbase4.apache.org:46217], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 02:15:30,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 02:15:30,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34701] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 02:15:30,380 INFO [Listener at localhost/42627] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=844 (was 844), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 381), ProcessCount=170 (was 170), AvailableMemoryMB=4426 (was 4429) 2023-07-18 02:15:30,380 WARN [Listener at localhost/42627] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-18 02:15:30,380 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 02:15:30,380 INFO [Listener at localhost/42627] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 02:15:30,380 DEBUG [Listener at localhost/42627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4875d50c to 127.0.0.1:64106 2023-07-18 02:15:30,380 DEBUG [Listener at localhost/42627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,381 DEBUG [Listener at localhost/42627] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 
02:15:30,381 DEBUG [Listener at localhost/42627] util.JVMClusterUtil(257): Found active master hash=1214557593, stopped=false 2023-07-18 02:15:30,381 DEBUG [Listener at localhost/42627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 02:15:30,381 DEBUG [Listener at localhost/42627] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 02:15:30,381 INFO [Listener at localhost/42627] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34701,1689646525826 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:30,384 INFO [Listener at localhost/42627] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:30,384 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 02:15:30,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:30,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:30,385 DEBUG [Listener at localhost/42627] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x127c9385 to 127.0.0.1:64106 2023-07-18 02:15:30,385 DEBUG [Listener at localhost/42627] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:30,385 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:30,386 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32775-0x101763670720003, 
quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 02:15:30,386 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1064): Closing user regions 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42149,1689646526006' ***** 2023-07-18 02:15:30,386 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(3305): Received CLOSE for 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46217,1689646526157' ***** 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32775,1689646526307' ***** 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:30,386 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:30,386 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:30,386 INFO [Listener at localhost/42627] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42297,1689646528125' ***** 2023-07-18 02:15:30,388 INFO [Listener at localhost/42627] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 02:15:30,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4ff67b2ac9c9087f40b9b252696553d5, disabling compactions & flushes 2023-07-18 02:15:30,388 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:30,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:30,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 2023-07-18 02:15:30,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. after waiting 0 ms 2023-07-18 02:15:30,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5. 
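From "Shutting down minicluster" onward, the harness stops the master and all four region servers; each region server closes its online regions (hbase:rsgroup, hbase:meta, hbase:namespace), flushing any pending memstore data and committing the resulting store files, which is where the flush entries below come from, and closed WAL files are moved to oldWALs. A minimal sketch of the usual teardown hook, assuming the standard HBaseTestingUtility pattern; the field and method names are illustrative, not the test's actual code.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class MiniClusterTeardownSketch {
      // Assumed to be the same utility instance that started the mini cluster.
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the active master and every region server; regions are closed
        // (flushing pending memstore data) before the JVM-local cluster exits.
        TEST_UTIL.shutdownMiniCluster();
      }
    }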
2023-07-18 02:15:30,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 4ff67b2ac9c9087f40b9b252696553d5 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 02:15:30,393 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 02:15:30,393 INFO [RS:1;jenkins-hbase4:46217] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f4b851f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:30,394 INFO [RS:3;jenkins-hbase4:42297] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3e8cf2fc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:30,393 INFO [RS:2;jenkins-hbase4:32775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@72c3dca2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:30,396 INFO [RS:3;jenkins-hbase4:42297] server.AbstractConnector(383): Stopped ServerConnector@134f4aae{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:30,396 INFO [RS:1;jenkins-hbase4:46217] server.AbstractConnector(383): Stopped ServerConnector@6d04da20{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:30,396 INFO [RS:1;jenkins-hbase4:46217] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:30,396 INFO [RS:3;jenkins-hbase4:42297] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:30,397 INFO [RS:0;jenkins-hbase4:42149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5d16009b{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-18 02:15:30,398 INFO [RS:3;jenkins-hbase4:42297] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d37a2b0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:30,397 INFO [RS:1;jenkins-hbase4:46217] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@42cd0009{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:30,399 INFO [RS:3;jenkins-hbase4:42297] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d3c5a39{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:30,396 INFO [RS:2;jenkins-hbase4:32775] server.AbstractConnector(383): Stopped ServerConnector@72f6fde0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:30,400 INFO [RS:1;jenkins-hbase4:46217] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@288689f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,STOPPED} 2023-07-18 
02:15:30,399 INFO [RS:0;jenkins-hbase4:42149] server.AbstractConnector(383): Stopped ServerConnector@6ba4dce4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 02:15:30,400 INFO [RS:0;jenkins-hbase4:42149] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:30,400 INFO [RS:2;jenkins-hbase4:32775] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 02:15:30,401 INFO [RS:0;jenkins-hbase4:42149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@295166fc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:30,402 INFO [RS:0;jenkins-hbase4:42149] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@75e5d650{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:30,402 INFO [RS:3;jenkins-hbase4:42297] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:30,403 INFO [RS:2;jenkins-hbase4:32775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@736db136{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-18 02:15:30,404 INFO [RS:2;jenkins-hbase4:32775] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51b99b82{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,STOPPED} 2023-07-18 02:15:30,404 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:30,404 INFO [RS:1;jenkins-hbase4:46217] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:30,404 INFO [RS:3;jenkins-hbase4:42297] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:30,405 INFO [RS:1;jenkins-hbase4:46217] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:30,405 INFO [RS:3;jenkins-hbase4:42297] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:30,405 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:30,405 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42297,1689646528125 2023-07-18 02:15:30,405 INFO [RS:1;jenkins-hbase4:46217] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:30,405 DEBUG [RS:3;jenkins-hbase4:42297] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x003ea3a7 to 127.0.0.1:64106 2023-07-18 02:15:30,405 DEBUG [RS:3;jenkins-hbase4:42297] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,405 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42297,1689646528125; all regions closed. 
2023-07-18 02:15:30,405 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46217,1689646526157 2023-07-18 02:15:30,405 DEBUG [RS:1;jenkins-hbase4:46217] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x561f31ec to 127.0.0.1:64106 2023-07-18 02:15:30,405 DEBUG [RS:1;jenkins-hbase4:46217] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,406 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46217,1689646526157; all regions closed. 2023-07-18 02:15:30,405 INFO [RS:0;jenkins-hbase4:42149] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42149,1689646526006 2023-07-18 02:15:30,406 DEBUG [RS:0;jenkins-hbase4:42149] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3a22d183 to 127.0.0.1:64106 2023-07-18 02:15:30,406 DEBUG [RS:0;jenkins-hbase4:42149] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 02:15:30,406 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 02:15:30,406 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:30,410 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 02:15:30,410 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 02:15:30,414 INFO [RS:2;jenkins-hbase4:32775] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 02:15:30,414 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 02:15:30,414 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 02:15:30,414 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1478): Online Regions={4ff67b2ac9c9087f40b9b252696553d5=hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5., 1588230740=hbase:meta,,1.1588230740} 2023-07-18 02:15:30,415 DEBUG [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1504): Waiting on 1588230740, 4ff67b2ac9c9087f40b9b252696553d5 2023-07-18 02:15:30,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 02:15:30,415 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 02:15:30,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 02:15:30,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 02:15:30,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 02:15:30,415 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 02:15:30,415 INFO [RS:2;jenkins-hbase4:32775] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 02:15:30,416 INFO [RS:2;jenkins-hbase4:32775] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 02:15:30,416 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(3305): Received CLOSE for 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:30,422 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32775,1689646526307 2023-07-18 02:15:30,424 DEBUG [RS:2;jenkins-hbase4:32775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4edb461d to 127.0.0.1:64106 2023-07-18 02:15:30,426 DEBUG [RS:2;jenkins-hbase4:32775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,426 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 02:15:30,426 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1478): Online Regions={5d42ab326f55041590a03c94226111bd=hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.} 2023-07-18 02:15:30,426 DEBUG [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1504): Waiting on 5d42ab326f55041590a03c94226111bd 2023-07-18 02:15:30,428 DEBUG [RS:1;jenkins-hbase4:46217] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs 2023-07-18 02:15:30,428 INFO [RS:1;jenkins-hbase4:46217] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46217%2C1689646526157:(num 1689646526887) 2023-07-18 02:15:30,428 DEBUG [RS:1;jenkins-hbase4:46217] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,428 INFO [RS:1;jenkins-hbase4:46217] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:30,428 DEBUG [RS:3;jenkins-hbase4:42297] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs 2023-07-18 02:15:30,428 INFO [RS:1;jenkins-hbase4:46217] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:30,428 INFO [RS:3;jenkins-hbase4:42297] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42297%2C1689646528125:(num 1689646528488) 2023-07-18 02:15:30,428 INFO [RS:1;jenkins-hbase4:46217] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:30,428 DEBUG [RS:3;jenkins-hbase4:42297] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 02:15:30,429 INFO [RS:1;jenkins-hbase4:46217] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:30,429 INFO [RS:3;jenkins-hbase4:42297] regionserver.LeaseManager(133): Closed leases 2023-07-18 02:15:30,429 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 02:15:30,429 INFO [RS:3;jenkins-hbase4:42297] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 02:15:30,429 INFO [RS:3;jenkins-hbase4:42297] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 02:15:30,429 INFO [RS:3;jenkins-hbase4:42297] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 02:15:30,429 INFO [RS:3;jenkins-hbase4:42297] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-18 02:15:30,429 INFO [RS:1;jenkins-hbase4:46217] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-18 02:15:30,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5d42ab326f55041590a03c94226111bd, disabling compactions & flushes
2023-07-18 02:15:30,431 INFO [RS:3;jenkins-hbase4:42297] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42297
2023-07-18 02:15:30,429 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 02:15:30,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.
2023-07-18 02:15:30,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.
2023-07-18 02:15:30,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd. after waiting 0 ms
2023-07-18 02:15:30,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.
2023-07-18 02:15:30,434 INFO [RS:1;jenkins-hbase4:46217] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46217
2023-07-18 02:15:30,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5d42ab326f55041590a03c94226111bd 1/1 column families, dataSize=267 B heapSize=904 B
2023-07-18 02:15:30,436 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125
2023-07-18 02:15:30,436 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,436 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42297,1689646528125
2023-07-18 02:15:30,437 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,438 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157
2023-07-18 02:15:30,438 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157
2023-07-18 02:15:30,438 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157
2023-07-18 02:15:30,438 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46217,1689646526157
2023-07-18 02:15:30,438 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42297,1689646528125]
2023-07-18 02:15:30,438 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42297,1689646528125; numProcessing=1
2023-07-18 02:15:30,440 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42297,1689646528125 already deleted, retry=false
2023-07-18 02:15:30,440 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42297,1689646528125 expired; onlineServers=3
2023-07-18 02:15:30,440 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46217,1689646526157]
2023-07-18 02:15:30,440 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46217,1689646526157; numProcessing=2
2023-07-18 02:15:30,451 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,452 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/.tmp/m/e2ef3d138bcc458e918d08f163fc5503
2023-07-18 02:15:30,453 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,453 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,454 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/info/eb67f60f5d834a17b9213de2f481c956
2023-07-18 02:15:30,455 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,461 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e2ef3d138bcc458e918d08f163fc5503
2023-07-18 02:15:30,461 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb67f60f5d834a17b9213de2f481c956
2023-07-18 02:15:30,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/.tmp/m/e2ef3d138bcc458e918d08f163fc5503 as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/m/e2ef3d138bcc458e918d08f163fc5503
2023-07-18 02:15:30,463 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/.tmp/info/5acaa1406bfa445296c73684ad30a656
2023-07-18 02:15:30,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5acaa1406bfa445296c73684ad30a656
2023-07-18 02:15:30,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e2ef3d138bcc458e918d08f163fc5503
2023-07-18 02:15:30,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/m/e2ef3d138bcc458e918d08f163fc5503, entries=12, sequenceid=29, filesize=5.4 K
2023-07-18 02:15:30,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/.tmp/info/5acaa1406bfa445296c73684ad30a656 as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/info/5acaa1406bfa445296c73684ad30a656
2023-07-18 02:15:30,471 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 4ff67b2ac9c9087f40b9b252696553d5 in 80ms, sequenceid=29, compaction requested=false
2023-07-18 02:15:30,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5acaa1406bfa445296c73684ad30a656
2023-07-18 02:15:30,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/info/5acaa1406bfa445296c73684ad30a656, entries=3, sequenceid=9, filesize=5.0 K
2023-07-18 02:15:30,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 5d42ab326f55041590a03c94226111bd in 50ms, sequenceid=9, compaction requested=false
2023-07-18 02:15:30,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/rsgroup/4ff67b2ac9c9087f40b9b252696553d5/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1
2023-07-18 02:15:30,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-18 02:15:30,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.
2023-07-18 02:15:30,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4ff67b2ac9c9087f40b9b252696553d5:
2023-07-18 02:15:30,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689646527238.4ff67b2ac9c9087f40b9b252696553d5.
2023-07-18 02:15:30,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/rep_barrier/96e350f3912541178dd17ebbfb785bd6
2023-07-18 02:15:30,502 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/namespace/5d42ab326f55041590a03c94226111bd/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-07-18 02:15:30,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96e350f3912541178dd17ebbfb785bd6
2023-07-18 02:15:30,503 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.
2023-07-18 02:15:30,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5d42ab326f55041590a03c94226111bd:
2023-07-18 02:15:30,503 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689646527093.5d42ab326f55041590a03c94226111bd.
2023-07-18 02:15:30,523 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/table/c86944cfd7ec4057b6031db15801568d
2023-07-18 02:15:30,529 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c86944cfd7ec4057b6031db15801568d
2023-07-18 02:15:30,530 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/info/eb67f60f5d834a17b9213de2f481c956 as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/info/eb67f60f5d834a17b9213de2f481c956
2023-07-18 02:15:30,536 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb67f60f5d834a17b9213de2f481c956
2023-07-18 02:15:30,536 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/info/eb67f60f5d834a17b9213de2f481c956, entries=22, sequenceid=26, filesize=7.3 K
2023-07-18 02:15:30,537 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/rep_barrier/96e350f3912541178dd17ebbfb785bd6 as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/rep_barrier/96e350f3912541178dd17ebbfb785bd6
2023-07-18 02:15:30,539 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,539 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42297-0x10176367072000b, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,539 INFO [RS:3;jenkins-hbase4:42297] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42297,1689646528125; zookeeper connection closed.
2023-07-18 02:15:30,539 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c769ca8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c769ca8
2023-07-18 02:15:30,540 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46217,1689646526157 already deleted, retry=false
2023-07-18 02:15:30,540 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46217,1689646526157 expired; onlineServers=2
2023-07-18 02:15:30,543 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96e350f3912541178dd17ebbfb785bd6
2023-07-18 02:15:30,543 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/rep_barrier/96e350f3912541178dd17ebbfb785bd6, entries=1, sequenceid=26, filesize=4.9 K
2023-07-18 02:15:30,544 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/.tmp/table/c86944cfd7ec4057b6031db15801568d as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/table/c86944cfd7ec4057b6031db15801568d
2023-07-18 02:15:30,549 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c86944cfd7ec4057b6031db15801568d
2023-07-18 02:15:30,550 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/table/c86944cfd7ec4057b6031db15801568d, entries=6, sequenceid=26, filesize=5.1 K
2023-07-18 02:15:30,550 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 135ms, sequenceid=26, compaction requested=false
2023-07-18 02:15:30,561 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1
2023-07-18 02:15:30,562 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-18 02:15:30,563 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-18 02:15:30,563 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-18 02:15:30,563 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-18 02:15:30,582 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,582 INFO [RS:1;jenkins-hbase4:46217] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46217,1689646526157; zookeeper connection closed.
2023-07-18 02:15:30,582 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:46217-0x101763670720002, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,583 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2c5b7f27] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2c5b7f27
2023-07-18 02:15:30,615 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42149,1689646526006; all regions closed.
2023-07-18 02:15:30,622 DEBUG [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs
2023-07-18 02:15:30,622 INFO [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42149%2C1689646526006.meta:.meta(num 1689646527037)
2023-07-18 02:15:30,626 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32775,1689646526307; all regions closed.
2023-07-18 02:15:30,628 DEBUG [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs
2023-07-18 02:15:30,628 INFO [RS:0;jenkins-hbase4:42149] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42149%2C1689646526006:(num 1689646526887)
2023-07-18 02:15:30,628 DEBUG [RS:0;jenkins-hbase4:42149] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 02:15:30,629 INFO [RS:0;jenkins-hbase4:42149] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,629 INFO [RS:0;jenkins-hbase4:42149] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-18 02:15:30,630 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 02:15:30,631 INFO [RS:0;jenkins-hbase4:42149] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42149
2023-07-18 02:15:30,633 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006
2023-07-18 02:15:30,633 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42149,1689646526006
2023-07-18 02:15:30,633 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,634 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42149,1689646526006]
2023-07-18 02:15:30,634 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42149,1689646526006; numProcessing=3
2023-07-18 02:15:30,634 DEBUG [RS:2;jenkins-hbase4:32775] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/oldWALs
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C32775%2C1689646526307:(num 1689646526896)
2023-07-18 02:15:30,635 DEBUG [RS:2;jenkins-hbase4:32775] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] regionserver.LeaseManager(133): Closed leases
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-18 02:15:30,635 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-18 02:15:30,635 INFO [RS:2;jenkins-hbase4:32775] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-18 02:15:30,636 INFO [RS:2;jenkins-hbase4:32775] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32775
2023-07-18 02:15:30,637 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42149,1689646526006 already deleted, retry=false
2023-07-18 02:15:30,637 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42149,1689646526006 expired; onlineServers=1
2023-07-18 02:15:30,735 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,735 INFO [RS:0;jenkins-hbase4:42149] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42149,1689646526006; zookeeper connection closed.
2023-07-18 02:15:30,735 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:42149-0x101763670720001, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:30,735 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@263541f4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@263541f4
2023-07-18 02:15:30,736 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,32775,1689646526307
2023-07-18 02:15:30,736 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-18 02:15:30,736 ERROR [Listener at localhost/42627-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@55400723 rejected from java.util.concurrent.ThreadPoolExecutor@517f8953[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
    at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
    at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2023-07-18 02:15:30,737 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,32775,1689646526307]
2023-07-18 02:15:30,737 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,32775,1689646526307; numProcessing=4
2023-07-18 02:15:30,738 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,32775,1689646526307 already deleted, retry=false
2023-07-18 02:15:30,738 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,32775,1689646526307 expired; onlineServers=0
2023-07-18 02:15:30,738 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34701,1689646525826' *****
2023-07-18 02:15:30,738 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-18 02:15:30,739 DEBUG [M:0;jenkins-hbase4:34701] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4c5b304d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-18 02:15:30,739 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-18 02:15:30,742 INFO [M:0;jenkins-hbase4:34701] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@21d993e9{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-18 02:15:30,742 INFO [M:0;jenkins-hbase4:34701] server.AbstractConnector(383): Stopped ServerConnector@50e3498a{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-18 02:15:30,742 INFO [M:0;jenkins-hbase4:34701] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-18 02:15:30,742 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-18 02:15:30,743 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-18 02:15:30,743 INFO [M:0;jenkins-hbase4:34701] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ce8454f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED}
2023-07-18 02:15:30,744 INFO [M:0;jenkins-hbase4:34701] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@558dcba1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/hadoop.log.dir/,STOPPED}
2023-07-18 02:15:30,744 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-18 02:15:30,744 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34701,1689646525826
2023-07-18 02:15:30,744 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34701,1689646525826; all regions closed.
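The ERROR logged by ClientCnxn$EventThread above is a benign shutdown race: a late ZooKeeper watcher event is handed to ZKWatcher after its internal executor has already been shut down, so the submit is rejected. A standalone Java sketch (not HBase code; the class and variable names here are illustrative only) that reproduces the same RejectedExecutionException:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;

    public class RejectedAfterShutdownSketch {
        public static void main(String[] args) {
            // Single-thread executor standing in for the watcher's event executor.
            ExecutorService pool = Executors.newSingleThreadExecutor();
            pool.shutdownNow(); // once shut down, the executor rejects new submissions
            try {
                pool.submit(() -> System.out.println("late watcher callback"));
            } catch (RejectedExecutionException e) {
                // Analogous to the "Error while calling watcher" ERROR in the log above.
                System.out.println("rejected: " + e);
            }
        }
    }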
2023-07-18 02:15:30,744 DEBUG [M:0;jenkins-hbase4:34701] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-18 02:15:30,744 INFO [M:0;jenkins-hbase4:34701] master.HMaster(1491): Stopping master jetty server
2023-07-18 02:15:30,745 INFO [M:0;jenkins-hbase4:34701] server.AbstractConnector(383): Stopped ServerConnector@26b927f9{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-18 02:15:30,745 DEBUG [M:0;jenkins-hbase4:34701] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-18 02:15:30,745 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-18 02:15:30,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646526631] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689646526631,5,FailOnTimeoutGroup]
2023-07-18 02:15:30,745 DEBUG [M:0;jenkins-hbase4:34701] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-18 02:15:30,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646526638] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689646526638,5,FailOnTimeoutGroup]
2023-07-18 02:15:30,745 INFO [M:0;jenkins-hbase4:34701] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-18 02:15:30,745 INFO [M:0;jenkins-hbase4:34701] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-18 02:15:30,746 INFO [M:0;jenkins-hbase4:34701] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-18 02:15:30,746 DEBUG [M:0;jenkins-hbase4:34701] master.HMaster(1512): Stopping service threads
2023-07-18 02:15:30,746 INFO [M:0;jenkins-hbase4:34701] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-18 02:15:30,746 ERROR [M:0;jenkins-hbase4:34701] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-18 02:15:30,746 INFO [M:0;jenkins-hbase4:34701] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-18 02:15:30,746 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-18 02:15:30,746 DEBUG [M:0;jenkins-hbase4:34701] zookeeper.ZKUtil(398): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-18 02:15:30,746 WARN [M:0;jenkins-hbase4:34701] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-18 02:15:30,746 INFO [M:0;jenkins-hbase4:34701] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-18 02:15:30,746 INFO [M:0;jenkins-hbase4:34701] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-18 02:15:30,746 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-18 02:15:30,747 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 02:15:30,747 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 02:15:30,747 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-18 02:15:30,747 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 02:15:30,747 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB
2023-07-18 02:15:30,758 INFO [M:0;jenkins-hbase4:34701] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/da55905c45854bbaaaf167ea1d522c01
2023-07-18 02:15:30,763 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/da55905c45854bbaaaf167ea1d522c01 as hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/da55905c45854bbaaaf167ea1d522c01
2023-07-18 02:15:30,767 INFO [M:0;jenkins-hbase4:34701] regionserver.HStore(1080): Added hdfs://localhost:41331/user/jenkins/test-data/669e7a4f-4e86-19f1-63d9-404d6ec8c940/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/da55905c45854bbaaaf167ea1d522c01, entries=22, sequenceid=175, filesize=11.1 K
2023-07-18 02:15:30,768 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78041, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false
2023-07-18 02:15:30,770 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-18 02:15:30,770 DEBUG [M:0;jenkins-hbase4:34701] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-18 02:15:30,773 INFO [M:0;jenkins-hbase4:34701] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-18 02:15:30,773 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-18 02:15:30,773 INFO [M:0;jenkins-hbase4:34701] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34701
2023-07-18 02:15:30,775 DEBUG [M:0;jenkins-hbase4:34701] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34701,1689646525826 already deleted, retry=false
2023-07-18 02:15:31,184 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:31,184 INFO [M:0;jenkins-hbase4:34701] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34701,1689646525826; zookeeper connection closed.
2023-07-18 02:15:31,184 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): master:34701-0x101763670720000, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:31,284 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:31,284 INFO [RS:2;jenkins-hbase4:32775] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32775,1689646526307; zookeeper connection closed.
2023-07-18 02:15:31,284 DEBUG [Listener at localhost/42627-EventThread] zookeeper.ZKWatcher(600): regionserver:32775-0x101763670720003, quorum=127.0.0.1:64106, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-18 02:15:31,285 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3f04744d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3f04744d
2023-07-18 02:15:31,285 INFO [Listener at localhost/42627] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-18 02:15:31,285 WARN [Listener at localhost/42627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 02:15:31,288 INFO [Listener at localhost/42627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 02:15:31,393 WARN [BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 02:15:31,393 WARN [BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-681486909-172.31.14.131-1689646525078 (Datanode Uuid 8a7e2789-5c6f-422c-abaa-47faba236b2c) service to localhost/127.0.0.1:41331
2023-07-18 02:15:31,394 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data5/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,395 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data6/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,396 WARN [Listener at localhost/42627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 02:15:31,400 INFO [Listener at localhost/42627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 02:15:31,505 WARN [BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-18 02:15:31,505 WARN [BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-681486909-172.31.14.131-1689646525078 (Datanode Uuid cb1d07e8-ef7e-4904-8ccb-8f29344c0d1c) service to localhost/127.0.0.1:41331
2023-07-18 02:15:31,506 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data3/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,506 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data4/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,507 WARN [Listener at localhost/42627] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-18 02:15:31,508 WARN [BP-681486909-172.31.14.131-1689646525078 heartbeating to localhost/127.0.0.1:41331] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-681486909-172.31.14.131-1689646525078 (Datanode Uuid 5f6b1722-f89b-4e77-a3a2-e2587aacc9d2) service to localhost/127.0.0.1:41331
2023-07-18 02:15:31,509 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data1/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,510 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/527ac157-3429-81c3-f99e-cf221451c37d/cluster_80d12f21-14ad-01d3-6749-7b372d00c374/dfs/data/data2/current/BP-681486909-172.31.14.131-1689646525078] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-18 02:15:31,510 INFO [Listener at localhost/42627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 02:15:31,623 INFO [Listener at localhost/42627] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-18 02:15:31,739 INFO [Listener at localhost/42627] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-18 02:15:31,766 INFO [Listener at localhost/42627] hbase.HBaseTestingUtility(1293): Minicluster is down
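The teardown above ends with HBaseTestingUtility reporting "Minicluster is down". A minimal sketch of the JUnit lifecycle that drives such a start/stop cycle (illustrative only; the class name and option values are assumptions, not the actual TestRSGroupsAdmin1 source):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycleSketch {
        private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

        @BeforeClass
        public static void setUpBeforeClass() throws Exception {
            // Example option values; the real test may configure the cluster differently.
            StartMiniClusterOption option = StartMiniClusterOption.builder()
                .numMasters(1)
                .numRegionServers(3)
                .numDataNodes(3)
                .build();
            TEST_UTIL.startMiniCluster(option);
        }

        @AfterClass
        public static void tearDownAfterClass() throws Exception {
            // Produces a shutdown sequence like the log above: regions close and flush,
            // WALs move to oldWALs, region servers and the master exit, DFS and the
            // MiniZK cluster stop, ending with "Minicluster is down".
            TEST_UTIL.shutdownMiniCluster();
        }
    }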