2023-07-13 03:16:05,221 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb 2023-07-13 03:16:05,238 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-13 03:16:05,252 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 03:16:05,252 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7, deleteOnExit=true 2023-07-13 03:16:05,252 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 03:16:05,253 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/test.cache.data in system properties and HBase conf 2023-07-13 03:16:05,254 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 03:16:05,254 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir in system properties and HBase conf 2023-07-13 03:16:05,255 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 03:16:05,255 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 03:16:05,255 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 03:16:05,359 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-13 03:16:05,720 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-13 03:16:05,724 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:05,724 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:05,725 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 03:16:05,725 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:05,725 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 03:16:05,726 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 03:16:05,726 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:05,727 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:05,727 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 03:16:05,728 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/nfs.dump.dir in system properties and HBase conf 2023-07-13 03:16:05,728 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir in system properties and HBase conf 2023-07-13 03:16:05,728 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:05,729 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 03:16:05,729 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 03:16:06,222 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:06,227 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:06,531 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-13 03:16:06,661 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-13 03:16:06,674 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:06,707 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:06,749 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/Jetty_localhost_localdomain_45515_hdfs____.vdmo1a/webapp 2023-07-13 03:16:06,886 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:45515 2023-07-13 03:16:06,895 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:06,896 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:07,401 WARN [Listener at localhost.localdomain/34135] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:07,490 WARN [Listener at localhost.localdomain/34135] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:07,513 WARN [Listener at localhost.localdomain/34135] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:07,521 INFO [Listener at localhost.localdomain/34135] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:07,527 INFO [Listener at 
localhost.localdomain/34135] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/Jetty_localhost_44777_datanode____.9mscle/webapp 2023-07-13 03:16:07,638 INFO [Listener at localhost.localdomain/34135] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44777 2023-07-13 03:16:08,075 WARN [Listener at localhost.localdomain/40287] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:08,099 WARN [Listener at localhost.localdomain/40287] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:08,104 WARN [Listener at localhost.localdomain/40287] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:08,108 INFO [Listener at localhost.localdomain/40287] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:08,115 INFO [Listener at localhost.localdomain/40287] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/Jetty_localhost_42585_datanode____tri0t7/webapp 2023-07-13 03:16:08,223 INFO [Listener at localhost.localdomain/40287] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42585 2023-07-13 03:16:08,242 WARN [Listener at localhost.localdomain/36209] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:08,292 WARN [Listener at localhost.localdomain/36209] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:08,294 WARN [Listener at localhost.localdomain/36209] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:08,296 INFO [Listener at localhost.localdomain/36209] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:08,302 INFO [Listener at localhost.localdomain/36209] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/Jetty_localhost_40785_datanode____phf527/webapp 2023-07-13 03:16:08,407 INFO [Listener at localhost.localdomain/36209] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40785 2023-07-13 03:16:08,426 WARN [Listener at localhost.localdomain/36261] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:08,668 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5da49c1dc22c72ba: Processing first storage report for DS-f2641e55-6772-43f9-9084-b6bc41af5cda from datanode f14afc60-6791-474c-b55e-56b1db75c49b 2023-07-13 03:16:08,669 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0x5da49c1dc22c72ba: from storage DS-f2641e55-6772-43f9-9084-b6bc41af5cda node DatanodeRegistration(127.0.0.1:43409, datanodeUuid=f14afc60-6791-474c-b55e-56b1db75c49b, infoPort=41273, infoSecurePort=0, ipcPort=36261, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,670 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x27c87f03354c34e6: Processing first storage report for DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96 from datanode 4609a206-ce53-4981-8365-9d39736f9c95 2023-07-13 03:16:08,670 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x27c87f03354c34e6: from storage DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96 node DatanodeRegistration(127.0.0.1:37299, datanodeUuid=4609a206-ce53-4981-8365-9d39736f9c95, infoPort=40961, infoSecurePort=0, ipcPort=40287, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,670 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x358fd4b616b6b808: Processing first storage report for DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf from datanode 2fbab921-4a8f-41a9-bccf-ecb06e44ce10 2023-07-13 03:16:08,670 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x358fd4b616b6b808: from storage DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf node DatanodeRegistration(127.0.0.1:43963, datanodeUuid=2fbab921-4a8f-41a9-bccf-ecb06e44ce10, infoPort=39325, infoSecurePort=0, ipcPort=36209, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,671 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x5da49c1dc22c72ba: Processing first storage report for DS-08233d9e-f943-48dd-b63f-7c5373f2eb87 from datanode f14afc60-6791-474c-b55e-56b1db75c49b 2023-07-13 03:16:08,671 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x5da49c1dc22c72ba: from storage DS-08233d9e-f943-48dd-b63f-7c5373f2eb87 node DatanodeRegistration(127.0.0.1:43409, datanodeUuid=f14afc60-6791-474c-b55e-56b1db75c49b, infoPort=41273, infoSecurePort=0, ipcPort=36261, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,671 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x27c87f03354c34e6: Processing first storage report for DS-14d99cbc-d774-4adb-8962-777b11982c88 from datanode 4609a206-ce53-4981-8365-9d39736f9c95 2023-07-13 03:16:08,671 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x27c87f03354c34e6: from storage DS-14d99cbc-d774-4adb-8962-777b11982c88 node DatanodeRegistration(127.0.0.1:37299, datanodeUuid=4609a206-ce53-4981-8365-9d39736f9c95, infoPort=40961, infoSecurePort=0, ipcPort=40287, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,672 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x358fd4b616b6b808: Processing first storage report for 
DS-cd401d47-17bc-4e27-94a3-ef246ea5e382 from datanode 2fbab921-4a8f-41a9-bccf-ecb06e44ce10 2023-07-13 03:16:08,672 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x358fd4b616b6b808: from storage DS-cd401d47-17bc-4e27-94a3-ef246ea5e382 node DatanodeRegistration(127.0.0.1:43963, datanodeUuid=2fbab921-4a8f-41a9-bccf-ecb06e44ce10, infoPort=39325, infoSecurePort=0, ipcPort=36209, storageInfo=lv=-57;cid=testClusterID;nsid=1161323377;c=1689218166310), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:08,906 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb 2023-07-13 03:16:09,013 INFO [Listener at localhost.localdomain/36261] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/zookeeper_0, clientPort=56998, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 03:16:09,029 INFO [Listener at localhost.localdomain/36261] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56998 2023-07-13 03:16:09,039 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:09,042 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:09,722 INFO [Listener at localhost.localdomain/36261] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 with version=8 2023-07-13 03:16:09,723 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/hbase-staging 2023-07-13 03:16:09,736 DEBUG [Listener at localhost.localdomain/36261] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 03:16:09,736 DEBUG [Listener at localhost.localdomain/36261] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 03:16:09,737 DEBUG [Listener at localhost.localdomain/36261] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 03:16:09,737 DEBUG [Listener at localhost.localdomain/36261] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
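For orientation, the StartMiniClusterOption printed at 03:16:05,252 above maps directly onto the standard HBaseTestingUtility API in branch-2.4. Below is a minimal, illustrative sketch of how a test asks for exactly this topology (1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server); it is not code taken from TestRSGroupsAdmin1, and everything other than the HBase API types and methods (the class name, variable names, and main-method framing) is an assumption for illustration.

    // Sketch only: starts a mini cluster with the same topology the log above reports.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // one HMaster, as logged
            .numRegionServers(3)  // three region servers, as logged
            .numDataNodes(3)      // three HDFS datanodes, as logged
            .numZkServers(1)      // single MiniZooKeeperCluster node, as logged
            .build();
        util.startMiniCluster(option); // brings up DFS, ZooKeeper, the master and region servers
        try {
          // test body would run against util.getConnection() here
        } finally {
          util.shutdownMiniCluster(); // tears the cluster down and removes the test data dirs
        }
      }
    }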
2023-07-13 03:16:10,124 INFO [Listener at localhost.localdomain/36261] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-13 03:16:10,756 INFO [Listener at localhost.localdomain/36261] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:10,805 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:10,806 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:10,806 INFO [Listener at localhost.localdomain/36261] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:10,806 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:10,807 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:10,999 INFO [Listener at localhost.localdomain/36261] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:11,085 DEBUG [Listener at localhost.localdomain/36261] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-13 03:16:11,201 INFO [Listener at localhost.localdomain/36261] ipc.NettyRpcServer(120): Bind to /148.251.75.209:33491 2023-07-13 03:16:11,217 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:11,219 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:11,250 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33491 connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:11,303 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:334910x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:11,309 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33491-0x1008454350d0000 connected 2023-07-13 03:16:11,363 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:11,364 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher 
on znode that does not yet exist, /hbase/running 2023-07-13 03:16:11,368 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:11,383 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33491 2023-07-13 03:16:11,385 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33491 2023-07-13 03:16:11,387 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33491 2023-07-13 03:16:11,391 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33491 2023-07-13 03:16:11,391 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33491 2023-07-13 03:16:11,434 INFO [Listener at localhost.localdomain/36261] log.Log(170): Logging initialized @6966ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-13 03:16:11,586 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:11,587 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:11,589 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:11,591 INFO [Listener at localhost.localdomain/36261] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 03:16:11,592 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:11,592 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:11,597 INFO [Listener at localhost.localdomain/36261] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 03:16:11,676 INFO [Listener at localhost.localdomain/36261] http.HttpServer(1146): Jetty bound to port 33253 2023-07-13 03:16:11,679 INFO [Listener at localhost.localdomain/36261] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:11,727 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:11,734 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37734faf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:11,736 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:11,738 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76f839dd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:11,996 INFO [Listener at localhost.localdomain/36261] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:12,013 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:12,013 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:12,016 INFO [Listener at localhost.localdomain/36261] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:12,026 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,060 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@51bc735f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/jetty-0_0_0_0-33253-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3220257022585147565/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:12,074 INFO [Listener at localhost.localdomain/36261] server.AbstractConnector(333): Started ServerConnector@1d5a3f5b{HTTP/1.1, (http/1.1)}{0.0.0.0:33253} 2023-07-13 03:16:12,075 INFO [Listener at localhost.localdomain/36261] server.Server(415): Started @7607ms 2023-07-13 03:16:12,079 INFO [Listener at localhost.localdomain/36261] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692, hbase.cluster.distributed=false 2023-07-13 03:16:12,184 INFO [Listener at localhost.localdomain/36261] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:12,184 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,184 INFO [Listener 
at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,184 INFO [Listener at localhost.localdomain/36261] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:12,185 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,185 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:12,192 INFO [Listener at localhost.localdomain/36261] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:12,195 INFO [Listener at localhost.localdomain/36261] ipc.NettyRpcServer(120): Bind to /148.251.75.209:37181 2023-07-13 03:16:12,198 INFO [Listener at localhost.localdomain/36261] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:12,218 DEBUG [Listener at localhost.localdomain/36261] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:12,220 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,226 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,229 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37181 connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:12,238 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:371810x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:12,240 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:371810x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:12,242 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:371810x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:12,243 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:371810x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:12,250 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37181-0x1008454350d0001 connected 2023-07-13 03:16:12,251 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37181 2023-07-13 03:16:12,251 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started 
handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37181 2023-07-13 03:16:12,253 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37181 2023-07-13 03:16:12,254 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37181 2023-07-13 03:16:12,254 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37181 2023-07-13 03:16:12,258 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:12,258 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:12,259 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:12,260 INFO [Listener at localhost.localdomain/36261] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:12,261 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:12,261 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:12,261 INFO [Listener at localhost.localdomain/36261] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 03:16:12,264 INFO [Listener at localhost.localdomain/36261] http.HttpServer(1146): Jetty bound to port 34513 2023-07-13 03:16:12,264 INFO [Listener at localhost.localdomain/36261] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:12,283 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,283 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2501c389{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:12,284 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,284 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f7c5c45{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:12,400 INFO [Listener at localhost.localdomain/36261] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:12,402 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:12,403 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:12,403 INFO [Listener at localhost.localdomain/36261] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:12,408 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,419 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4bb093d2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/jetty-0_0_0_0-34513-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1918744921871566897/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:12,428 INFO [Listener at localhost.localdomain/36261] server.AbstractConnector(333): Started ServerConnector@1f3aef2c{HTTP/1.1, (http/1.1)}{0.0.0.0:34513} 2023-07-13 03:16:12,428 INFO [Listener at localhost.localdomain/36261] server.Server(415): Started @7960ms 2023-07-13 03:16:12,446 INFO [Listener at localhost.localdomain/36261] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:12,446 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,446 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:12,450 INFO [Listener at localhost.localdomain/36261] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:12,450 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,450 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:12,450 INFO [Listener at localhost.localdomain/36261] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:12,457 INFO [Listener at localhost.localdomain/36261] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44171 2023-07-13 03:16:12,458 INFO [Listener at localhost.localdomain/36261] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:12,485 DEBUG [Listener at localhost.localdomain/36261] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:12,486 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,488 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,490 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44171 connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:12,512 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:441710x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:12,515 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44171-0x1008454350d0002 connected 2023-07-13 03:16:12,515 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:12,523 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:12,524 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:12,534 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44171 2023-07-13 03:16:12,538 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44171 2023-07-13 03:16:12,540 DEBUG [Listener at localhost.localdomain/36261] 
ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44171 2023-07-13 03:16:12,542 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44171 2023-07-13 03:16:12,546 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44171 2023-07-13 03:16:12,550 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:12,550 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:12,550 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:12,551 INFO [Listener at localhost.localdomain/36261] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:12,551 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:12,551 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:12,552 INFO [Listener at localhost.localdomain/36261] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 03:16:12,553 INFO [Listener at localhost.localdomain/36261] http.HttpServer(1146): Jetty bound to port 37017 2023-07-13 03:16:12,553 INFO [Listener at localhost.localdomain/36261] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:12,591 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,591 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@45c0801a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:12,592 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,592 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4c7d57dc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:12,751 INFO [Listener at localhost.localdomain/36261] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:12,753 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:12,753 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:12,753 INFO [Listener at localhost.localdomain/36261] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:12,755 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,757 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7a6083cf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/jetty-0_0_0_0-37017-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9141997883371192323/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:12,758 INFO [Listener at localhost.localdomain/36261] server.AbstractConnector(333): Started ServerConnector@15e5beaf{HTTP/1.1, (http/1.1)}{0.0.0.0:37017} 2023-07-13 03:16:12,758 INFO [Listener at localhost.localdomain/36261] server.Server(415): Started @8290ms 2023-07-13 03:16:12,776 INFO [Listener at localhost.localdomain/36261] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:12,777 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,777 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:12,777 INFO [Listener at localhost.localdomain/36261] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:12,778 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:12,778 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:12,778 INFO [Listener at localhost.localdomain/36261] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:12,780 INFO [Listener at localhost.localdomain/36261] ipc.NettyRpcServer(120): Bind to /148.251.75.209:32993 2023-07-13 03:16:12,781 INFO [Listener at localhost.localdomain/36261] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:12,790 DEBUG [Listener at localhost.localdomain/36261] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:12,792 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,794 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:12,796 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:32993 connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:12,815 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:329930x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:12,816 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:329930x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:12,817 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:329930x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:12,818 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:329930x0, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:12,822 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:32993-0x1008454350d0003 connected 2023-07-13 03:16:12,830 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32993 2023-07-13 03:16:12,834 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32993 2023-07-13 03:16:12,837 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32993 2023-07-13 03:16:12,842 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32993 2023-07-13 03:16:12,843 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32993 2023-07-13 03:16:12,846 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:12,847 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:12,847 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:12,850 INFO [Listener at localhost.localdomain/36261] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:12,850 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:12,850 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:12,851 INFO [Listener at localhost.localdomain/36261] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-13 03:16:12,852 INFO [Listener at localhost.localdomain/36261] http.HttpServer(1146): Jetty bound to port 34279 2023-07-13 03:16:12,852 INFO [Listener at localhost.localdomain/36261] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:12,887 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,888 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@47636af2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:12,888 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:12,889 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1c116fa5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:13,021 INFO [Listener at localhost.localdomain/36261] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:13,023 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:13,023 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:13,023 INFO [Listener at localhost.localdomain/36261] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:13,036 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:13,037 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@187365af{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/jetty-0_0_0_0-34279-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4101701929960293635/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:13,039 INFO [Listener at localhost.localdomain/36261] server.AbstractConnector(333): Started ServerConnector@6a985c61{HTTP/1.1, (http/1.1)}{0.0.0.0:34279} 2023-07-13 03:16:13,039 INFO [Listener at localhost.localdomain/36261] server.Server(415): Started @8571ms 2023-07-13 03:16:13,048 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:13,072 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5e4f6e80{HTTP/1.1, (http/1.1)}{0.0.0.0:39337} 2023-07-13 03:16:13,073 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @8605ms 2023-07-13 03:16:13,073 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,090 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:13,098 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,123 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:13,124 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:13,124 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,124 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:13,127 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:13,130 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:13,130 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,33491,1689218169949 from backup master directory 2023-07-13 03:16:13,130 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:13,139 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,139 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:13,140 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 03:16:13,140 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,144 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-13 03:16:13,146 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-13 03:16:13,233 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/hbase.id with ID: 5fc5ba14-f68f-4a86-a6cc-9cf3f3ea9add 2023-07-13 03:16:13,279 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:13,297 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,354 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1d79db27 to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:13,380 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64bfb604, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:13,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:13,409 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 03:16:13,427 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-13 03:16:13,427 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-13 03:16:13,429 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 03:16:13,432 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-13 03:16:13,433 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:13,472 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store-tmp 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:13,517 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:13,517 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:13,517 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:13,519 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/WALs/jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,539 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C33491%2C1689218169949, suffix=, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/WALs/jenkins-hbase20.apache.org,33491,1689218169949, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/oldWALs, maxLogs=10 2023-07-13 03:16:13,592 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:13,592 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:13,592 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:13,599 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-13 03:16:13,661 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/WALs/jenkins-hbase20.apache.org,33491,1689218169949/jenkins-hbase20.apache.org%2C33491%2C1689218169949.1689218173549 2023-07-13 03:16:13,662 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK]] 2023-07-13 03:16:13,663 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:13,663 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:13,666 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,668 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,724 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,731 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 03:16:13,762 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 03:16:13,774 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-13 03:16:13,780 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,782 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,799 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:13,803 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:13,805 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9894735200, jitterRate=-0.07848097383975983}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:13,805 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:13,806 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 03:16:13,826 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 03:16:13,827 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 03:16:13,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 03:16:13,831 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-13 03:16:13,861 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 30 msec 2023-07-13 03:16:13,861 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 03:16:13,883 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 03:16:13,888 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 03:16:13,895 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 03:16:13,900 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 03:16:13,905 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 03:16:13,908 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,909 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 03:16:13,909 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 03:16:13,922 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 03:16:13,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:13,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:13,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:13,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:13,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,928 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,33491,1689218169949, sessionid=0x1008454350d0000, setting cluster-up flag (Was=false) 2023-07-13 03:16:13,944 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,948 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 03:16:13,949 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,957 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:13,960 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 03:16:13,962 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:13,965 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.hbase-snapshot/.tmp 2023-07-13 03:16:14,041 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 03:16:14,043 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(951): ClusterId : 5fc5ba14-f68f-4a86-a6cc-9cf3f3ea9add 2023-07-13 03:16:14,043 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(951): ClusterId : 5fc5ba14-f68f-4a86-a6cc-9cf3f3ea9add 2023-07-13 03:16:14,043 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(951): ClusterId : 5fc5ba14-f68f-4a86-a6cc-9cf3f3ea9add 2023-07-13 03:16:14,049 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:14,049 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:14,049 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:14,053 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 03:16:14,055 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 03:16:14,055 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:14,055 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:14,055 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:14,055 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:14,055 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:14,056 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:14,057 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 03:16:14,058 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 03:16:14,059 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:14,059 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:14,059 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:14,062 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ReadOnlyZKClient(139): Connect 0x60d1f91a to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:14,063 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ReadOnlyZKClient(139): Connect 0x775d90c9 to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:14,063 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ReadOnlyZKClient(139): Connect 0x50f77fb1 to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:14,073 DEBUG [RS:2;jenkins-hbase20:32993] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@597b4f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:14,073 DEBUG [RS:0;jenkins-hbase20:37181] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@58bf6ee7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:14,074 DEBUG [RS:2;jenkins-hbase20:32993] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@286393e5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:14,074 DEBUG [RS:0;jenkins-hbase20:37181] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62321301, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:14,075 DEBUG [RS:1;jenkins-hbase20:44171] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@bed3fcd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:14,075 DEBUG [RS:1;jenkins-hbase20:44171] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ab218e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:14,099 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:44171 2023-07-13 03:16:14,099 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:37181 2023-07-13 03:16:14,100 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:32993 2023-07-13 03:16:14,105 INFO [RS:1;jenkins-hbase20:44171] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:14,106 INFO [RS:0;jenkins-hbase20:37181] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:14,106 INFO [RS:0;jenkins-hbase20:37181] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:14,106 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 03:16:14,105 INFO [RS:2;jenkins-hbase20:32993] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:14,109 INFO [RS:2;jenkins-hbase20:32993] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:14,106 INFO [RS:1;jenkins-hbase20:44171] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:14,109 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 03:16:14,109 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 03:16:14,110 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:32993, startcode=1689218172776 2023-07-13 03:16:14,110 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:37181, startcode=1689218172183 2023-07-13 03:16:14,110 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:44171, startcode=1689218172445 2023-07-13 03:16:14,153 DEBUG [RS:1;jenkins-hbase20:44171] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:14,153 DEBUG [RS:2;jenkins-hbase20:32993] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:14,153 DEBUG [RS:0;jenkins-hbase20:37181] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:14,175 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:14,224 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:14,226 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51123, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:14,226 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60433, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:14,226 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48087, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:14,230 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-13 03:16:14,231 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:14,231 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 03:16:14,240 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:14,240 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:14,240 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:14,240 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:14,240 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-13 03:16:14,241 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,241 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:14,241 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,243 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:14,251 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: 
Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:14,254 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:14,254 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689218204254 2023-07-13 03:16:14,258 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 03:16:14,260 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:14,260 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 03:16:14,269 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:14,269 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 03:16:14,279 INFO [master/jenkins-hbase20:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 03:16:14,279 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 03:16:14,280 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 03:16:14,280 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 03:16:14,281 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,283 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 03:16:14,286 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 03:16:14,286 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 03:16:14,286 WARN [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 03:16:14,286 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(2830): Master is not running yet 2023-07-13 03:16:14,286 WARN [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 03:16:14,287 WARN [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-13 03:16:14,287 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 03:16:14,287 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 03:16:14,290 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 03:16:14,291 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 03:16:14,294 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218174293,5,FailOnTimeoutGroup] 2023-07-13 03:16:14,295 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218174294,5,FailOnTimeoutGroup] 2023-07-13 03:16:14,295 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,295 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-13 03:16:14,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,298 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,352 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:14,353 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:14,354 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 2023-07-13 03:16:14,383 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:14,386 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:14,388 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:44171, startcode=1689218172445 2023-07-13 03:16:14,388 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:32993, startcode=1689218172776 2023-07-13 03:16:14,388 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:37181, startcode=1689218172183 2023-07-13 03:16:14,391 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:14,392 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:14,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,394 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:14,397 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 03:16:14,398 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 03:16:14,400 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:14,401 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:14,402 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,403 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:14,404 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,404 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 03:16:14,405 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 03:16:14,405 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 2023-07-13 03:16:14,405 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34135 2023-07-13 03:16:14,405 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33253 2023-07-13 03:16:14,407 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 2023-07-13 03:16:14,407 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,407 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34135 2023-07-13 03:16:14,407 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:14,408 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 03:16:14,407 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33253 2023-07-13 03:16:14,409 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 2023-07-13 03:16:14,409 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34135 2023-07-13 03:16:14,409 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33253 2023-07-13 03:16:14,412 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:14,412 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:14,413 INFO 
[StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,415 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:14,416 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:14,418 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,419 WARN [RS:1;jenkins-hbase20:44171] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:14,419 INFO [RS:1;jenkins-hbase20:44171] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:14,420 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,420 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,37181,1689218172183] 2023-07-13 03:16:14,420 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44171,1689218172445] 2023-07-13 03:16:14,421 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,32993,1689218172776] 2023-07-13 03:16:14,421 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,421 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:14,421 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,425 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 03:16:14,428 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:14,421 WARN [RS:2;jenkins-hbase20:32993] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
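The ServerManager and ServerEventsListenerThread entries above show each registering region server being folded into the rsgroup default group ("Updated with servers: 1/2/3"). A sketch of reading that group back through the rsgroup client from this module, assuming the 2.4-era RSGroupAdminClient API; the class name DefaultGroupSketch is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // After the three registrations above, the default group should hold all three servers.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          for (Address server : defaultGroup.getServers()) {
            System.out.println("default group member: " + server);
          }
        }
      }
    }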
2023-07-13 03:16:14,421 WARN [RS:0;jenkins-hbase20:37181] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:14,437 INFO [RS:0;jenkins-hbase20:37181] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:14,438 INFO [RS:2;jenkins-hbase20:32993] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:14,438 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,438 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,440 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:14,445 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11942516480, jitterRate=0.11223351955413818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:14,445 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:14,445 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:14,446 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:14,446 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:14,446 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:14,446 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:14,448 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:14,449 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:14,452 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,452 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,453 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,452 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,453 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,453 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,453 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,453 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,454 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,462 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:14,463 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 03:16:14,465 DEBUG [RS:2;jenkins-hbase20:32993] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:14,465 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:14,465 DEBUG [RS:0;jenkins-hbase20:37181] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:14,472 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 03:16:14,476 INFO [RS:2;jenkins-hbase20:32993] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:14,476 INFO [RS:1;jenkins-hbase20:44171] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:14,476 INFO [RS:0;jenkins-hbase20:37181] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:14,485 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 03:16:14,490 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 03:16:14,500 INFO [RS:2;jenkins-hbase20:32993] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:14,500 INFO [RS:0;jenkins-hbase20:37181] 
regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:14,500 INFO [RS:1;jenkins-hbase20:44171] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:14,506 INFO [RS:1;jenkins-hbase20:44171] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:14,506 INFO [RS:2;jenkins-hbase20:32993] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:14,506 INFO [RS:0;jenkins-hbase20:37181] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:14,507 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,507 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,507 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,509 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:14,509 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:14,509 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:14,517 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,517 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,517 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
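The MemStoreFlusher and PressureAwareCompactionThroughputController entries above reflect heap-derived limits (globalMemStoreLimit 782.4 M with a low-water mark of 743.3 M) and compaction throughput bounds of 50 to 100 MB/s retuned every 60000 ms. A sketch of the configuration keys behind those numbers; the key names are given as assumptions based on the 2.x defaults:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemStoreAndThroughputSketch {
      public static Configuration tuned() {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap usable by all memstores (782.4 M here), and the
        // low-water mark expressed as a fraction of that limit (743.3 M here).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds and tuning period logged above.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
        return conf;
      }
    }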
2023-07-13 03:16:14,517 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,518 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service 
name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:14,519 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,519 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,520 DEBUG [RS:2;jenkins-hbase20:32993] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,520 DEBUG [RS:0;jenkins-hbase20:37181] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,520 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,520 DEBUG [RS:1;jenkins-hbase20:44171] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:14,521 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
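The ChoreService entries above ("Chore ScheduledChore name=..., period=..., is enabled") come from the generic chore scheduler each server runs for its periodic tasks (CompactionChecker, MemstoreFlusherChore, nonceCleaner, and so on). A minimal, self-contained sketch of scheduling a chore with that API; the chore name and the empty body are placeholders:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        ChoreService choreService = new ChoreService("sketch");
        // Runs every 1000 ms, like the CompactionChecker / MemstoreFlusherChore above.
        ScheduledChore checker = new ScheduledChore("sketchChecker", stopper, 1000) {
          @Override protected void chore() {
            // periodic work goes here
          }
        };
        choreService.scheduleChore(checker);
        // ... later, on shutdown
        choreService.shutdown();
      }
    }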
2023-07-13 03:16:14,522 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,522 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,523 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,544 INFO [RS:2;jenkins-hbase20:32993] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:14,544 INFO [RS:1;jenkins-hbase20:44171] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:14,544 INFO [RS:0;jenkins-hbase20:37181] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:14,547 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,32993,1689218172776-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,547 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44171,1689218172445-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,547 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,37181,1689218172183-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:14,567 INFO [RS:2;jenkins-hbase20:32993] regionserver.Replication(203): jenkins-hbase20.apache.org,32993,1689218172776 started 2023-07-13 03:16:14,567 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,32993,1689218172776, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:32993, sessionid=0x1008454350d0003 2023-07-13 03:16:14,568 INFO [RS:0;jenkins-hbase20:37181] regionserver.Replication(203): jenkins-hbase20.apache.org,37181,1689218172183 started 2023-07-13 03:16:14,568 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,37181,1689218172183, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:37181, sessionid=0x1008454350d0001 2023-07-13 03:16:14,568 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:14,568 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:14,568 DEBUG [RS:2;jenkins-hbase20:32993] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,568 DEBUG [RS:0;jenkins-hbase20:37181] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,569 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,32993,1689218172776' 2023-07-13 03:16:14,569 DEBUG [RS:0;jenkins-hbase20:37181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37181,1689218172183' 2023-07-13 03:16:14,569 DEBUG 
[RS:0;jenkins-hbase20:37181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:14,569 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:14,570 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:14,570 DEBUG [RS:0;jenkins-hbase20:37181] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:14,570 INFO [RS:1;jenkins-hbase20:44171] regionserver.Replication(203): jenkins-hbase20.apache.org,44171,1689218172445 started 2023-07-13 03:16:14,570 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44171,1689218172445, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44171, sessionid=0x1008454350d0002 2023-07-13 03:16:14,571 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:14,571 DEBUG [RS:1;jenkins-hbase20:44171] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,571 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:14,571 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44171,1689218172445' 2023-07-13 03:16:14,571 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:14,571 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:14,571 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:14,571 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:14,571 DEBUG [RS:0;jenkins-hbase20:37181] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,571 DEBUG [RS:2;jenkins-hbase20:32993] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:14,572 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,32993,1689218172776' 2023-07-13 03:16:14,572 DEBUG [RS:0;jenkins-hbase20:37181] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,37181,1689218172183' 2023-07-13 03:16:14,572 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:14,572 DEBUG [RS:0;jenkins-hbase20:37181] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:14,572 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:14,575 DEBUG [RS:0;jenkins-hbase20:37181] 
procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:14,575 DEBUG [RS:2;jenkins-hbase20:32993] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:14,575 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:14,575 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:14,575 DEBUG [RS:1;jenkins-hbase20:44171] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:14,575 DEBUG [RS:0;jenkins-hbase20:37181] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:14,575 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44171,1689218172445' 2023-07-13 03:16:14,575 DEBUG [RS:2;jenkins-hbase20:32993] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:14,575 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:14,576 INFO [RS:2;jenkins-hbase20:32993] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:14,575 INFO [RS:0;jenkins-hbase20:37181] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:14,577 INFO [RS:2;jenkins-hbase20:32993] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 03:16:14,577 INFO [RS:0;jenkins-hbase20:37181] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 03:16:14,578 DEBUG [RS:1;jenkins-hbase20:44171] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:14,578 DEBUG [RS:1;jenkins-hbase20:44171] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:14,578 INFO [RS:1;jenkins-hbase20:44171] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:14,578 INFO [RS:1;jenkins-hbase20:44171] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
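RegionServerRpcQuotaManager and RegionServerSpaceQuotaManager both report "Quota support disabled" because quotas are off by default in this test cluster. If a run needed them, the switch is a single boolean key, sketched here under the assumption that hbase.quota.enabled is the relevant property:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class QuotaSketch {
      public static Configuration withQuotas() {
        Configuration conf = HBaseConfiguration.create();
        // Enables the RPC and space quota managers on masters and region servers.
        conf.setBoolean("hbase.quota.enabled", true);
        return conf;
      }
    }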
2023-07-13 03:16:14,642 DEBUG [jenkins-hbase20:33491] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 03:16:14,660 DEBUG [jenkins-hbase20:33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:14,661 DEBUG [jenkins-hbase20:33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:14,661 DEBUG [jenkins-hbase20:33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:14,661 DEBUG [jenkins-hbase20:33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:14,662 DEBUG [jenkins-hbase20:33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:14,668 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37181,1689218172183, state=OPENING 2023-07-13 03:16:14,677 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 03:16:14,679 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:14,679 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:14,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:14,691 INFO [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37181%2C1689218172183, suffix=, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,37181,1689218172183, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:14,696 INFO [RS:2;jenkins-hbase20:32993] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C32993%2C1689218172776, suffix=, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,32993,1689218172776, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:14,697 INFO [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44171%2C1689218172445, suffix=, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44171,1689218172445, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:14,736 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 
03:16:14,739 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:14,739 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:14,740 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:14,741 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:14,741 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:14,748 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:14,748 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:14,748 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:14,759 INFO [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,37181,1689218172183/jenkins-hbase20.apache.org%2C37181%2C1689218172183.1689218174694 2023-07-13 03:16:14,759 INFO [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44171,1689218172445/jenkins-hbase20.apache.org%2C44171%2C1689218172445.1689218174698 2023-07-13 03:16:14,760 DEBUG [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK]] 2023-07-13 03:16:14,763 DEBUG [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK]] 2023-07-13 03:16:14,765 INFO [RS:2;jenkins-hbase20:32993] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,32993,1689218172776/jenkins-hbase20.apache.org%2C32993%2C1689218172776.1689218174698 2023-07-13 03:16:14,765 DEBUG [RS:2;jenkins-hbase20:32993] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK]] 2023-07-13 03:16:14,834 WARN [ReadOnlyZKClient-127.0.0.1:56998@0x1d79db27] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 03:16:14,855 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:14,860 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:14,861 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37181] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:51378 deadline: 1689218234861, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,872 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:14,876 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:14,883 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51388, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:14,897 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 03:16:14,898 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:14,902 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C37181%2C1689218172183.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,37181,1689218172183, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:14,926 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:14,928 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): 
SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:14,930 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:14,948 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,37181,1689218172183/jenkins-hbase20.apache.org%2C37181%2C1689218172183.meta.1689218174904.meta 2023-07-13 03:16:14,951 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK]] 2023-07-13 03:16:14,952 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:14,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:14,956 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 03:16:14,958 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
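The wal.WALFactory and wal.AbstractFSWAL entries above show each region server, and the meta opener, creating AsyncFSWAL writers with blocksize=256 MB, rollsize=128 MB and maxLogs=32 over a three-datanode pipeline. A sketch of the corresponding WAL configuration keys, named here as assumptions based on the 2.x defaults:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static Configuration walTuning() {
        Configuration conf = HBaseConfiguration.create();
        // Provider logged as AsyncFSWALProvider; "filesystem" would select the classic FSHLog.
        conf.set("hbase.wal.provider", "asyncfs");
        // blocksize=256 MB; rollsize is blocksize * multiplier (0.5 -> 128 MB).
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        // maxLogs=32 before forced flushes and rolls kick in.
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }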
2023-07-13 03:16:14,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 03:16:14,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:14,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 03:16:14,963 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 03:16:14,967 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:14,968 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:14,969 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:14,969 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:14,970 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,970 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:14,971 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:14,971 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:14,972 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:14,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,973 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:14,975 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:14,975 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:14,976 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:14,976 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:14,978 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:14,981 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:14,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
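FlushLargeStoresPolicy reports that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set for hbase:meta, so it falls back to the region memstore flush size divided by the number of families: 128 MB / 3 = 42.7 M, i.e. the flushSizeLowerBound=44739242 seen a few lines later. A sketch of setting both knobs explicitly; the 16 MB lower bound is an arbitrary example value:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushPolicySketch {
      public static Configuration flushTuning() {
        Configuration conf = HBaseConfiguration.create();
        // Region-level memstore flush threshold (default 128 MB).
        conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
        // Per-column-family lower bound used by FlushLargeStoresPolicy; when unset,
        // flushSize / numberOfFamilies is used, as the log line above shows.
        conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 16L * 1024 * 1024);
        return conf;
      }
    }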
2023-07-13 03:16:14,989 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:14,999 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11111845120, jitterRate=0.03487122058868408}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:14,999 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:15,009 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689218174869 2023-07-13 03:16:15,030 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 03:16:15,032 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 03:16:15,032 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37181,1689218172183, state=OPEN 2023-07-13 03:16:15,035 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:15,036 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:15,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 03:16:15,041 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37181,1689218172183 in 348 msec 2023-07-13 03:16:15,049 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 03:16:15,049 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 571 msec 2023-07-13 03:16:15,056 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 982 msec 2023-07-13 03:16:15,057 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689218175057, completionTime=-1 2023-07-13 03:16:15,057 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 03:16:15,057 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
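At this point hbase:meta is OPEN on jenkins-hbase20.apache.org,37181 and its location has been published under /hbase/meta-region-server, so clients stop hitting the NotServingRegionException seen earlier. A client-side sketch of resolving that location, assuming a standard Connection against the same ZooKeeper quorum:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Force a reload so a stale OPENING location is not reused from the cache.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:meta is served by " + loc.getServerName());
        }
      }
    }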
2023-07-13 03:16:15,124 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 03:16:15,125 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689218235125 2023-07-13 03:16:15,125 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689218295125 2023-07-13 03:16:15,125 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 67 msec 2023-07-13 03:16:15,142 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33491,1689218169949-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:15,143 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33491,1689218169949-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:15,143 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33491,1689218169949-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:15,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:33491, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:15,145 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:15,155 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 03:16:15,168 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
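TableNamespaceManager creates the hbase:namespace table because it does not exist yet; user namespaces are rows layered on top of it. A sketch of creating one through the Admin API once the cluster is up; the namespace name sketch_ns is illustrative:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Backed by a row in hbase:namespace once TableNamespaceManager is initialized.
          admin.createNamespace(NamespaceDescriptor.create("sketch_ns").build());
        }
      }
    }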
2023-07-13 03:16:15,170 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:15,181 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 03:16:15,184 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:15,187 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:15,210 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,213 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 empty. 2023-07-13 03:16:15,216 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,216 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 03:16:15,274 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:15,277 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => f819c5469435fdc78753bc4f41cd4d89, NAME => 'hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:15,378 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 
2023-07-13 03:16:15,394 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 03:16:15,397 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:15,399 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:15,403 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,404 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 empty. 2023-07-13 03:16:15,405 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,405 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 03:16:15,462 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:15,462 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing f819c5469435fdc78753bc4f41cd4d89, disabling compactions & flushes 2023-07-13 03:16:15,465 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:15,465 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:15,465 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. after waiting 0 ms 2023-07-13 03:16:15,465 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:15,466 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 
2023-07-13 03:16:15,466 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for f819c5469435fdc78753bc4f41cd4d89: 2023-07-13 03:16:15,478 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:15,478 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:15,480 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7c4e74675a07c3fb9472d5b7eb467f88, NAME => 'hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:15,502 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218175481"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218175481"}]},"ts":"1689218175481"} 2023-07-13 03:16:15,507 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:15,508 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 7c4e74675a07c3fb9472d5b7eb467f88, disabling compactions & flushes 2023-07-13 03:16:15,509 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,509 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,509 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. after waiting 0 ms 2023-07-13 03:16:15,509 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,509 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
2023-07-13 03:16:15,509 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:15,518 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:15,519 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218175519"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218175519"}]},"ts":"1689218175519"} 2023-07-13 03:16:15,551 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:15,555 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:15,556 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:15,557 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:15,560 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218175555"}]},"ts":"1689218175555"} 2023-07-13 03:16:15,560 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218175558"}]},"ts":"1689218175558"} 2023-07-13 03:16:15,564 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 03:16:15,566 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 03:16:15,568 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:15,568 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:15,568 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:15,568 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:15,568 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:15,570 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:15,571 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:15,571 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:15,571 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:15,571 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:15,571 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, ASSIGN}] 2023-07-13 03:16:15,571 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, ASSIGN}] 2023-07-13 03:16:15,577 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, ASSIGN 2023-07-13 03:16:15,577 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, ASSIGN 2023-07-13 03:16:15,579 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:15,579 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:15,580 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-13 03:16:15,583 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:15,583 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:15,584 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218175583"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218175583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218175583"}]},"ts":"1689218175583"} 2023-07-13 03:16:15,584 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218175583"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218175583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218175583"}]},"ts":"1689218175583"} 2023-07-13 03:16:15,589 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:15,591 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:15,748 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,748 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c4e74675a07c3fb9472d5b7eb467f88, NAME => 'hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:15,749 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:15,749 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. service=MultiRowMutationService 2023-07-13 03:16:15,750 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 03:16:15,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:15,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,755 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,758 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:15,758 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:15,758 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c4e74675a07c3fb9472d5b7eb467f88 columnFamilyName m 2023-07-13 03:16:15,760 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(310): Store=7c4e74675a07c3fb9472d5b7eb467f88/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:15,761 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,768 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:15,774 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:15,775 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7c4e74675a07c3fb9472d5b7eb467f88; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@259031aa, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:15,775 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:15,777 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88., pid=8, masterSystemTime=1689218175742 2023-07-13 03:16:15,781 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,781 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:15,781 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 
2023-07-13 03:16:15,781 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f819c5469435fdc78753bc4f41cd4d89, NAME => 'hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:15,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:15,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,782 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,783 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:15,783 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218175782"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218175782"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218175782"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218175782"}]},"ts":"1689218175782"} 2023-07-13 03:16:15,785 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,790 DEBUG [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info 2023-07-13 03:16:15,790 DEBUG [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info 2023-07-13 03:16:15,791 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f819c5469435fdc78753bc4f41cd4d89 columnFamilyName info 2023-07-13 03:16:15,792 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] regionserver.HStore(310): Store=f819c5469435fdc78753bc4f41cd4d89/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:15,793 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-13 03:16:15,795 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,37181,1689218172183 in 199 msec 2023-07-13 03:16:15,796 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,803 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-13 03:16:15,803 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, ASSIGN in 224 msec 2023-07-13 03:16:15,809 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:15,809 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:15,809 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218175809"}]},"ts":"1689218175809"} 2023-07-13 03:16:15,815 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 03:16:15,817 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:15,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f819c5469435fdc78753bc4f41cd4d89; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9893023840, jitterRate=-0.07864035665988922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:15,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f819c5469435fdc78753bc4f41cd4d89: 2023-07-13 03:16:15,819 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, 
state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:15,819 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89., pid=9, masterSystemTime=1689218175742 2023-07-13 03:16:15,822 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:15,823 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:15,823 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 440 msec 2023-07-13 03:16:15,825 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:15,826 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218175825"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218175825"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218175825"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218175825"}]},"ts":"1689218175825"} 2023-07-13 03:16:15,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-13 03:16:15,834 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,37181,1689218172183 in 239 msec 2023-07-13 03:16:15,837 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-13 03:16:15,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, ASSIGN in 263 msec 2023-07-13 03:16:15,840 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:15,840 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218175840"}]},"ts":"1689218175840"} 2023-07-13 03:16:15,842 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 03:16:15,856 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:15,861 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 685 msec 2023-07-13 03:16:15,885 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): 
master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 03:16:15,886 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:15,886 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:15,922 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 03:16:15,922 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-13 03:16:15,936 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 03:16:15,960 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:15,973 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 44 msec 2023-07-13 03:16:15,976 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 03:16:15,991 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:15,996 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-07-13 03:16:16,006 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 03:16:16,007 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 03:16:16,008 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.867sec 2023-07-13 03:16:16,012 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:16,012 INFO [master/jenkins-hbase20:0:becomeActiveMaster] 
quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 03:16:16,012 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:16,014 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:16,014 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 03:16:16,015 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 03:16:16,017 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33491,1689218169949-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 03:16:16,018 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,33491,1689218169949-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-13 03:16:16,023 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 03:16:16,029 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 03:16:16,051 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ReadOnlyZKClient(139): Connect 0x62fd740f to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:16,057 DEBUG [Listener at localhost.localdomain/36261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41587bac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:16,075 DEBUG [hconnection-0x2b42746c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:16,089 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51396, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:16,099 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:16,100 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:16,110 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 03:16:16,114 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45566, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 03:16:16,128 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 03:16:16,129 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:16,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-13 03:16:16,134 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ReadOnlyZKClient(139): Connect 0x5386d4f3 to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:16,144 DEBUG [Listener at localhost.localdomain/36261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d2f15cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:16,144 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:16,158 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:16,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008454350d000a connected 2023-07-13 03:16:16,188 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=416, OpenFileDescriptor=666, MaxFileDescriptor=60000, SystemLoadAverage=540, ProcessCount=173, AvailableMemoryMB=3926 2023-07-13 03:16:16,191 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-13 03:16:16,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:16,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:16,258 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 03:16:16,275 INFO [Listener at localhost.localdomain/36261] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:16,276 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:16,276 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:16,276 INFO [Listener at localhost.localdomain/36261] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:16,276 INFO [Listener at 
localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:16,276 INFO [Listener at localhost.localdomain/36261] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:16,276 INFO [Listener at localhost.localdomain/36261] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:16,281 INFO [Listener at localhost.localdomain/36261] ipc.NettyRpcServer(120): Bind to /148.251.75.209:44325 2023-07-13 03:16:16,281 INFO [Listener at localhost.localdomain/36261] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:16,286 DEBUG [Listener at localhost.localdomain/36261] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:16,288 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:16,293 INFO [Listener at localhost.localdomain/36261] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:16,298 INFO [Listener at localhost.localdomain/36261] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44325 connecting to ZooKeeper ensemble=127.0.0.1:56998 2023-07-13 03:16:16,310 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:443250x0, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:16,313 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44325-0x1008454350d000b connected 2023-07-13 03:16:16,313 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:16,314 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 03:16:16,315 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ZKUtil(164): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:16,316 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44325 2023-07-13 03:16:16,316 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44325 2023-07-13 03:16:16,316 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44325 2023-07-13 03:16:16,317 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, 
numCallQueues=1, port=44325 2023-07-13 03:16:16,317 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44325 2023-07-13 03:16:16,319 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:16,319 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:16,319 INFO [Listener at localhost.localdomain/36261] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:16,320 INFO [Listener at localhost.localdomain/36261] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:16,320 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:16,320 INFO [Listener at localhost.localdomain/36261] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:16,320 INFO [Listener at localhost.localdomain/36261] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 03:16:16,321 INFO [Listener at localhost.localdomain/36261] http.HttpServer(1146): Jetty bound to port 46337 2023-07-13 03:16:16,321 INFO [Listener at localhost.localdomain/36261] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:16,322 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:16,323 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3120b24b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:16,323 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:16,323 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6149e6b5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:16,447 INFO [Listener at localhost.localdomain/36261] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:16,448 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:16,448 INFO [Listener at localhost.localdomain/36261] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:16,449 INFO [Listener at localhost.localdomain/36261] session.HouseKeeper(132): 
node0 Scavenging every 600000ms 2023-07-13 03:16:16,450 INFO [Listener at localhost.localdomain/36261] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:16,451 INFO [Listener at localhost.localdomain/36261] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@26dddac4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/java.io.tmpdir/jetty-0_0_0_0-46337-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7120408902811721630/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:16,453 INFO [Listener at localhost.localdomain/36261] server.AbstractConnector(333): Started ServerConnector@171b7e62{HTTP/1.1, (http/1.1)}{0.0.0.0:46337} 2023-07-13 03:16:16,453 INFO [Listener at localhost.localdomain/36261] server.Server(415): Started @11985ms 2023-07-13 03:16:16,462 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(951): ClusterId : 5fc5ba14-f68f-4a86-a6cc-9cf3f3ea9add 2023-07-13 03:16:16,463 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:16,466 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:16,466 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:16,468 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:16,470 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ReadOnlyZKClient(139): Connect 0x485d8eb9 to 127.0.0.1:56998 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:16,481 DEBUG [RS:3;jenkins-hbase20:44325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@42a5e5c9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:16,482 DEBUG [RS:3;jenkins-hbase20:44325] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a763f9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:16,491 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:44325 2023-07-13 03:16:16,491 INFO [RS:3;jenkins-hbase20:44325] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:16,491 INFO [RS:3;jenkins-hbase20:44325] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:16,491 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 03:16:16,492 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,33491,1689218169949 with isa=jenkins-hbase20.apache.org/148.251.75.209:44325, startcode=1689218176275 2023-07-13 03:16:16,492 DEBUG [RS:3;jenkins-hbase20:44325] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:16,498 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55133, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:16,506 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33491] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,506 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:16,508 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692 2023-07-13 03:16:16,508 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:34135 2023-07-13 03:16:16,508 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=33253 2023-07-13 03:16:16,513 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:16,513 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:16,513 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:16,513 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:16,515 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,44325,1689218176275] 2023-07-13 03:16:16,515 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:16,515 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:16,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:16,516 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:16,516 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,516 WARN [RS:3;jenkins-hbase20:44325] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:16,516 INFO [RS:3;jenkins-hbase20:44325] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:16,516 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,516 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,517 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:16,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,517 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:16,518 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:16,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:16,528 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,33491,1689218169949] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 03:16:16,533 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:16,534 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,534 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,535 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ZKUtil(162): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:16,536 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:16,536 INFO [RS:3;jenkins-hbase20:44325] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:16,544 INFO [RS:3;jenkins-hbase20:44325] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:16,545 INFO [RS:3;jenkins-hbase20:44325] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:16,545 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:16,547 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:16,549 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
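
The memstore limit and compaction throughput figures logged just above are driven by region server configuration. A minimal sketch of the keys involved, assuming the property names are remembered correctly (they are not taken from this log) and that the 100 MB/s and 50 MB/s bounds map to the PressureAwareCompactionThroughputController settings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RegionServerTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of the heap usable by all memstores (782.4 M above is this fraction
        // of the test JVM heap); the low-water mark is a fraction of that limit.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds, in bytes per second.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        System.out.println("memstore fraction = "
            + conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f));
      }
    }
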
2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:16,550 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,551 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,551 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,551 DEBUG [RS:3;jenkins-hbase20:44325] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:16,556 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:16,556 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:16,556 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:16,568 INFO [RS:3;jenkins-hbase20:44325] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:16,568 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,44325,1689218176275-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
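
The ScheduledChore entries above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore) are periodic tasks run by the region server's ChoreService. A small, self-contained sketch of the same mechanism, using a made-up chore name and period and assuming the constructor shapes recalled here (ScheduledChore and ChoreService are internal HBase classes, shown only for illustration):

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };
        // Period is in milliseconds, matching the "period=..., unit=MILLISECONDS" lines above.
        ScheduledChore demo = new ScheduledChore("DemoChore", stopper, 1000) {
          @Override protected void chore() {
            System.out.println("DemoChore tick");
          }
        };
        ChoreService service = new ChoreService("demo");
        service.scheduleChore(demo);
        Thread.sleep(3500);   // let it fire a few times
        stopper.stop("done");
        service.shutdown();
      }
    }
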
2023-07-13 03:16:16,582 INFO [RS:3;jenkins-hbase20:44325] regionserver.Replication(203): jenkins-hbase20.apache.org,44325,1689218176275 started 2023-07-13 03:16:16,582 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,44325,1689218176275, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:44325, sessionid=0x1008454350d000b 2023-07-13 03:16:16,583 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:16,583 DEBUG [RS:3;jenkins-hbase20:44325] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,583 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44325,1689218176275' 2023-07-13 03:16:16,583 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:16,583 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,44325,1689218176275' 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:16,584 DEBUG [RS:3;jenkins-hbase20:44325] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:16,585 DEBUG [RS:3;jenkins-hbase20:44325] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:16,585 INFO [RS:3;jenkins-hbase20:44325] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:16,585 INFO [RS:3;jenkins-hbase20:44325] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
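
The flush-table-proc and online-snapshot members started above are the region-server side of the ZooKeeper-coordinated procedures behind client flush and snapshot requests. A hedged client-side sketch of what exercises them (table and snapshot names are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Flushing a table fans out to each hosting region server through the
          // /hbase/flush-table-proc znodes the log shows being watched above.
          admin.flush(TableName.valueOf("some_table"));
          // Snapshots use the analogous /hbase/online-snapshot coordination.
          admin.snapshot("some_table_snap", TableName.valueOf("some_table"));
        }
      }
    }
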
2023-07-13 03:16:16,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:16,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:16,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:16,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:16,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:16,603 DEBUG [hconnection-0x2cd1b0c2-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:16,607 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:51410, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:16,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:16,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:16,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:16,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:16,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:45566 deadline: 1689219376625, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
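
The ConstraintException above comes from trying to move the master's address (port 33491) into the new "master" rsgroup; only addresses of registered region servers can be moved, so the test base records it as a non-fatal setup note in the warning that follows. A sketch of the same client calls using the RSGroupAdminClient named in the stack trace (an internal, test-facing client; the constructor shape is assumed and the host and port are placeholders):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          groups.addRSGroup("master");
          try {
            // Passing the HMaster's host:port fails: it is not a registered region server.
            groups.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 33491)),
                "master");
          } catch (ConstraintException e) {
            // "Server ...:33491 is either offline or it does not exist."
            System.out.println("rejected as expected: " + e.getMessage());
          }
        }
      }
    }
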
2023-07-13 03:16:16,627 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:16,630 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:16,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:16,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:16,633 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:16,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:16,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:16,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:16,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:16,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:16,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:16,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:16,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:16,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:16,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 
03:16:16,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:16,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:16,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:16,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(238): Moving server region f819c5469435fdc78753bc4f41cd4d89, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:16,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:16,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:16,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:16,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:16,688 INFO [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44325%2C1689218176275, suffix=, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44325,1689218176275, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:16,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, REOPEN/MOVE 2023-07-13 03:16:16,696 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, REOPEN/MOVE 2023-07-13 03:16:16,699 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(238): Moving server region 7c4e74675a07c3fb9472d5b7eb467f88, which do not belong to RSGroup 
Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,699 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218176698"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218176698"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218176698"}]},"ts":"1689218176698"} 2023-07-13 03:16:16,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:16,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:16,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:16,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:16,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:16,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:16,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE 2023-07-13 03:16:16,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:16,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:16,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 03:16:16,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-13 03:16:16,723 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, 
region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE 2023-07-13 03:16:16,724 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 03:16:16,728 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,728 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218176728"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218176728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218176728"}]},"ts":"1689218176728"} 2023-07-13 03:16:16,732 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,37181,1689218172183, state=CLOSING 2023-07-13 03:16:16,737 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:16,737 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:16,737 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:16,740 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:16,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:16,749 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:16,750 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:16,752 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=17, ppid=13, state=RUNNABLE; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:16,759 INFO [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44325,1689218176275/jenkins-hbase20.apache.org%2C44325%2C1689218176275.1689218176690 2023-07-13 03:16:16,759 DEBUG [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK]] 2023-07-13 03:16:16,882 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 03:16:16,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:16,883 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:16,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f819c5469435fdc78753bc4f41cd4d89, disabling compactions & flushes 2023-07-13 03:16:16,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:16,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:16,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. after waiting 0 ms 2023-07-13 03:16:16,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:16,884 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:16,885 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:16,885 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:16,885 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:16,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing f819c5469435fdc78753bc4f41cd4d89 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-13 03:16:16,885 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.22 KB heapSize=6.16 KB 2023-07-13 03:16:17,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/.tmp/info/c33c2610b3444bfdbffdeb691e5d65ea 2023-07-13 03:16:17,045 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.04 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/bcaca02537e54611aed1a2e9a228755c 2023-07-13 03:16:17,105 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/.tmp/info/c33c2610b3444bfdbffdeb691e5d65ea as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info/c33c2610b3444bfdbffdeb691e5d65ea 2023-07-13 03:16:17,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info/c33c2610b3444bfdbffdeb691e5d65ea, entries=2, sequenceid=6, filesize=4.8 K 2023-07-13 03:16:17,130 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for f819c5469435fdc78753bc4f41cd4d89 in 245ms, sequenceid=6, compaction requested=false 2023-07-13 03:16:17,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 03:16:17,158 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/804791cb25304cdcb162f6411c6bacb2 2023-07-13 03:16:17,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-13 03:16:17,164 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 
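
At this point hbase:namespace has been flushed and closed on server 37181 and recorded for a move toward the new group's server 44325. A small sketch of how a client could confirm where the region landed afterwards (purely illustrative, not part of the test):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class WhereIsNamespaceSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // After the move this should print the new hosting server.
            System.out.println(loc.getRegion().getEncodedName()
                + " -> " + loc.getServerName());
          }
        }
      }
    }
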
2023-07-13 03:16:17,164 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f819c5469435fdc78753bc4f41cd4d89: 2023-07-13 03:16:17,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f819c5469435fdc78753bc4f41cd4d89 move to jenkins-hbase20.apache.org,44325,1689218176275 record at close sequenceid=6 2023-07-13 03:16:17,169 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/bcaca02537e54611aed1a2e9a228755c as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/bcaca02537e54611aed1a2e9a228755c 2023-07-13 03:16:17,170 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:17,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:17,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/bcaca02537e54611aed1a2e9a228755c, entries=22, sequenceid=16, filesize=7.3 K 2023-07-13 03:16:17,184 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/804791cb25304cdcb162f6411c6bacb2 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/804791cb25304cdcb162f6411c6bacb2 2023-07-13 03:16:17,202 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/804791cb25304cdcb162f6411c6bacb2, entries=4, sequenceid=16, filesize=4.8 K 2023-07-13 03:16:17,205 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.22 KB/3296, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 320ms, sequenceid=16, compaction requested=false 2023-07-13 03:16:17,205 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 03:16:17,227 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-13 03:16:17,228 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:17,229 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:17,229 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 
03:16:17,229 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase20.apache.org,44171,1689218172445 record at close sequenceid=16 2023-07-13 03:16:17,232 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-13 03:16:17,233 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 03:16:17,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-13 03:16:17,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,37181,1689218172183 in 496 msec 2023-07-13 03:16:17,237 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:17,388 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:17,388 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44171,1689218172445, state=OPENING 2023-07-13 03:16:17,389 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:17,390 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:17,390 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:17,547 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:17,547 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:17,557 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60494, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:17,564 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 03:16:17,564 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:17,569 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44171%2C1689218172445.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44171,1689218172445, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, 
maxLogs=32 2023-07-13 03:16:17,608 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:17,635 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:17,635 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:17,643 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44171,1689218172445/jenkins-hbase20.apache.org%2C44171%2C1689218172445.meta.1689218177570.meta 2023-07-13 03:16:17,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK], DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK]] 2023-07-13 03:16:17,643 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 03:16:17,644 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
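
The reopened meta region gets its own WAL from the AsyncFSWALProvider instantiated above (the fan-out DFS output pipeline in the surrounding SASL lines). The provider is configurable; a hedged sketch of the keys involved, with the key names recalled from memory rather than taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalProviderSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "asyncfs" selects AsyncFSWALProvider; "filesystem" falls back to the classic FSHLog.
        conf.set("hbase.wal.provider", "asyncfs");
        // hbase:meta can use a separate provider for its own WAL files.
        conf.set("hbase.wal.meta_provider", "asyncfs");
        System.out.println("wal provider = " + conf.get("hbase.wal.provider"));
      }
    }
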
2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 03:16:17,644 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 03:16:17,661 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:17,663 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:17,663 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:17,664 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:17,702 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/bcaca02537e54611aed1a2e9a228755c 2023-07-13 03:16:17,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:17,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:17,706 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:17,706 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:17,707 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:17,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:17,708 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:17,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:17,710 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:17,711 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:17,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-13 03:16:17,727 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/804791cb25304cdcb162f6411c6bacb2 2023-07-13 03:16:17,729 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:17,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:17,733 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:17,737 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 03:16:17,745 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:17,748 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9764490400, jitterRate=-0.09061096608638763}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:17,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:17,750 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689218177547 2023-07-13 03:16:17,755 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 03:16:17,758 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44171,1689218172445, state=OPEN 2023-07-13 03:16:17,758 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 03:16:17,759 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:17,759 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:17,765 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=CLOSED 2023-07-13 03:16:17,765 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218177765"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218177765"}]},"ts":"1689218177765"} 2023-07-13 03:16:17,766 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37181] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Mutate size: 217 connection: 148.251.75.209:51378 deadline: 1689218237766, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44171 startCode=1689218172445. As of locationSeqNum=16. 
2023-07-13 03:16:17,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-13 03:16:17,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44171,1689218172445 in 370 msec 2023-07-13 03:16:17,772 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 1.0540 sec 2023-07-13 03:16:17,868 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:17,872 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60510, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:17,882 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-13 03:16:17,882 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,37181,1689218172183 in 1.1720 sec 2023-07-13 03:16:17,884 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:17,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:17,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7c4e74675a07c3fb9472d5b7eb467f88, disabling compactions & flushes 2023-07-13 03:16:17,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:17,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:17,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. after waiting 0 ms 2023-07-13 03:16:17,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
2023-07-13 03:16:17,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 7c4e74675a07c3fb9472d5b7eb467f88 1/1 column families, dataSize=1.40 KB heapSize=2.40 KB 2023-07-13 03:16:17,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.40 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/b15a5711a2e54542bce9c6f3fae93ae6 2023-07-13 03:16:17,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/b15a5711a2e54542bce9c6f3fae93ae6 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b15a5711a2e54542bce9c6f3fae93ae6 2023-07-13 03:16:18,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b15a5711a2e54542bce9c6f3fae93ae6, entries=3, sequenceid=9, filesize=5.2 K 2023-07-13 03:16:18,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.40 KB/1437, heapSize ~2.38 KB/2440, currentSize=0 B/0 for 7c4e74675a07c3fb9472d5b7eb467f88 in 94ms, sequenceid=9, compaction requested=false 2023-07-13 03:16:18,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 03:16:18,032 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 03:16:18,033 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:18,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:18,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:18,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 7c4e74675a07c3fb9472d5b7eb467f88 move to jenkins-hbase20.apache.org,44171,1689218172445 record at close sequenceid=9 2023-07-13 03:16:18,034 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 03:16:18,034 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:18,035 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218178034"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218178034"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218178034"}]},"ts":"1689218178034"} 2023-07-13 03:16:18,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=12, state=RUNNABLE; OpenRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:18,038 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=CLOSED 2023-07-13 03:16:18,039 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218178038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218178038"}]},"ts":"1689218178038"} 2023-07-13 03:16:18,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-13 03:16:18,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,37181,1689218172183 in 1.2980 sec 2023-07-13 03:16:18,047 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:18,194 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:18,194 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:18,198 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 03:16:18,198 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:18,200 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:18,200 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218178200"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218178200"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218178200"}]},"ts":"1689218178200"} 2023-07-13 03:16:18,204 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:18,214 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:18,214 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f819c5469435fdc78753bc4f41cd4d89, NAME => 'hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:18,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:18,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,215 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,218 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,219 DEBUG [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info 2023-07-13 03:16:18,219 DEBUG [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info 2023-07-13 03:16:18,220 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f819c5469435fdc78753bc4f41cd4d89 columnFamilyName info 2023-07-13 03:16:18,236 DEBUG [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/info/c33c2610b3444bfdbffdeb691e5d65ea 2023-07-13 03:16:18,236 INFO [StoreOpener-f819c5469435fdc78753bc4f41cd4d89-1] regionserver.HStore(310): Store=f819c5469435fdc78753bc4f41cd4d89/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:18,238 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,241 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,246 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:18,248 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f819c5469435fdc78753bc4f41cd4d89; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12004937120, jitterRate=0.11804689466953278}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:18,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f819c5469435fdc78753bc4f41cd4d89: 2023-07-13 03:16:18,251 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89., pid=19, masterSystemTime=1689218178193 2023-07-13 03:16:18,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:18,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 
2023-07-13 03:16:18,259 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=f819c5469435fdc78753bc4f41cd4d89, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:18,259 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218178259"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218178259"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218178259"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218178259"}]},"ts":"1689218178259"} 2023-07-13 03:16:18,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=12 2023-07-13 03:16:18,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=12, state=SUCCESS; OpenRegionProcedure f819c5469435fdc78753bc4f41cd4d89, server=jenkins-hbase20.apache.org,44325,1689218176275 in 225 msec 2023-07-13 03:16:18,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=f819c5469435fdc78753bc4f41cd4d89, REOPEN/MOVE in 1.5790 sec 2023-07-13 03:16:18,364 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:18,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c4e74675a07c3fb9472d5b7eb467f88, NAME => 'hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:18,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:18,365 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. service=MultiRowMutationService 2023-07-13 03:16:18,365 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 03:16:18,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:18,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,366 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,369 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,378 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:18,379 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:18,379 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c4e74675a07c3fb9472d5b7eb467f88 columnFamilyName m 2023-07-13 03:16:18,391 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b15a5711a2e54542bce9c6f3fae93ae6 2023-07-13 03:16:18,391 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(310): Store=7c4e74675a07c3fb9472d5b7eb467f88/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:18,393 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,396 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,403 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:18,405 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7c4e74675a07c3fb9472d5b7eb467f88; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7440da7d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:18,405 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:18,406 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88., pid=20, masterSystemTime=1689218178359 2023-07-13 03:16:18,414 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:18,415 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:18,415 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:18,416 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218178415"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218178415"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218178415"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218178415"}]},"ts":"1689218178415"} 2023-07-13 03:16:18,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-13 03:16:18,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44171,1689218172445 in 214 msec 2023-07-13 03:16:18,430 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE in 1.7250 sec 2023-07-13 03:16:18,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to default 2023-07-13 03:16:18,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:18,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:18,723 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37181] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 148.251.75.209:51410 deadline: 1689218238723, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44171 startCode=1689218172445. As of locationSeqNum=9. 2023-07-13 03:16:18,827 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37181] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:51410 deadline: 1689218238827, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44171 startCode=1689218172445. As of locationSeqNum=16. 2023-07-13 03:16:18,930 DEBUG [hconnection-0x2cd1b0c2-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:18,934 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45378, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:18,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:18,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:18,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:18,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:18,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:18,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:18,995 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:18,998 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37181] ipc.CallRunner(144): callId: 50 service: ClientService methodName: ExecService size: 626 connection: 148.251.75.209:51378 deadline: 1689218238998, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: 
hostname=jenkins-hbase20.apache.org port=44171 startCode=1689218172445. As of locationSeqNum=9. 2023-07-13 03:16:19,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-13 03:16:19,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:19,109 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:19,110 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:19,111 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:19,112 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:19,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:19,119 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:19,126 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,126 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,126 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,126 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,126 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,127 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 empty. 2023-07-13 03:16:19,127 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e empty. 
2023-07-13 03:16:19,127 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa empty. 2023-07-13 03:16:19,127 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 empty. 2023-07-13 03:16:19,127 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 empty. 2023-07-13 03:16:19,130 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,130 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,130 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,130 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,130 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,130 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 03:16:19,154 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:19,155 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 43c7234259f6da500759a6e1f628fe78, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:19,155 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating 
{ENCODED => 78e6979d0d289e5998cdb743fccea0c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:19,156 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => de12356ae110fb148dc5fed11bfe84b7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:19,201 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,202 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 43c7234259f6da500759a6e1f628fe78, disabling compactions & flushes 2023-07-13 03:16:19,202 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:19,202 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:19,202 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. after waiting 0 ms 2023-07-13 03:16:19,202 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:19,202 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 
2023-07-13 03:16:19,206 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 43c7234259f6da500759a6e1f628fe78: 2023-07-13 03:16:19,206 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,207 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 78e6979d0d289e5998cdb743fccea0c7, disabling compactions & flushes 2023-07-13 03:16:19,207 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,207 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. after waiting 0 ms 2023-07-13 03:16:19,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,208 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 01fead1a7e0c2fc4b6e58d7bbd7db30e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:19,208 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 
2023-07-13 03:16:19,208 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 78e6979d0d289e5998cdb743fccea0c7: 2023-07-13 03:16:19,209 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 83f7e87289c2c60762ebf26a0789eaaa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:19,211 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,212 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing de12356ae110fb148dc5fed11bfe84b7, disabling compactions & flushes 2023-07-13 03:16:19,213 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,213 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,213 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. after waiting 0 ms 2023-07-13 03:16:19,213 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,213 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 
2023-07-13 03:16:19,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for de12356ae110fb148dc5fed11bfe84b7: 2023-07-13 03:16:19,244 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,245 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 83f7e87289c2c60762ebf26a0789eaaa, disabling compactions & flushes 2023-07-13 03:16:19,246 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,246 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,246 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. after waiting 0 ms 2023-07-13 03:16:19,246 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,246 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,246 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 83f7e87289c2c60762ebf26a0789eaaa: 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 01fead1a7e0c2fc4b6e58d7bbd7db30e, disabling compactions & flushes 2023-07-13 03:16:19,248 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 
after waiting 0 ms 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:19,248 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:19,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 01fead1a7e0c2fc4b6e58d7bbd7db30e: 2023-07-13 03:16:19,252 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:19,254 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218179253"}]},"ts":"1689218179253"} 2023-07-13 03:16:19,254 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218179253"}]},"ts":"1689218179253"} 2023-07-13 03:16:19,254 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218179253"}]},"ts":"1689218179253"} 2023-07-13 03:16:19,254 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218179253"}]},"ts":"1689218179253"} 2023-07-13 03:16:19,254 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179253"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218179253"}]},"ts":"1689218179253"} 2023-07-13 03:16:19,300 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-13 03:16:19,302 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:19,302 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218179302"}]},"ts":"1689218179302"} 2023-07-13 03:16:19,304 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 03:16:19,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:19,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:19,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:19,309 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:19,309 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, ASSIGN}] 2023-07-13 03:16:19,313 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, ASSIGN 2023-07-13 03:16:19,313 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, ASSIGN 2023-07-13 03:16:19,314 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, ASSIGN 2023-07-13 03:16:19,314 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, ASSIGN 2023-07-13 03:16:19,315 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, ASSIGN 2023-07-13 03:16:19,315 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:19,315 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:19,315 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:19,315 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:19,316 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:19,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:19,466 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
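The balancer line above reports that all five regions received assignment plans on the two servers in the group (ports 44171 and 44325). A hedged sketch of how a client could read back the resulting placement once the ASSIGN procedures finish, assuming the same Connection as in the previous sketch; the class name is illustrative.

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionPlacementSketch {
  // Prints each region's encoded name and the server it is currently hosted on.
  static void printPlacement(Connection conn) throws IOException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}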
2023-07-13 03:16:19,471 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,471 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:19,471 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:19,471 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,471 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,472 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218179471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218179471"}]},"ts":"1689218179471"} 2023-07-13 03:16:19,472 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218179471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218179471"}]},"ts":"1689218179471"} 2023-07-13 03:16:19,472 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218179471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218179471"}]},"ts":"1689218179471"} 2023-07-13 03:16:19,472 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218179471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218179471"}]},"ts":"1689218179471"} 2023-07-13 03:16:19,472 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179471"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218179471"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218179471"}]},"ts":"1689218179471"} 2023-07-13 03:16:19,474 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=26, state=RUNNABLE; OpenRegionProcedure 
83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:19,476 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=23, state=RUNNABLE; OpenRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:19,477 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=24, state=RUNNABLE; OpenRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:19,481 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; OpenRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:19,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=22, state=RUNNABLE; OpenRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:19,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:19,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 78e6979d0d289e5998cdb743fccea0c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 03:16:19,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 
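The "Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, ... prefetchOnOpen=false" line printed as each store of family f opens reflects the column family descriptor plus server-side defaults. Purely as an illustration of where those flags come from, a sketch of the builder calls that correspond to them, assuming the standard 2.4 client API; TestRSGroupsAdmin1 does not necessarily set any of these explicitly.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CacheConfigSketch {
  // Builds a family descriptor whose settings mirror the flags named in the
  // cacheConfig log line; the values are the defaults being logged, not a tuning recommendation.
  static ColumnFamilyDescriptor familyF() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
        .setBlockCacheEnabled(true)      // cacheDataOnRead=true
        .setCacheDataOnWrite(false)      // cacheDataOnWrite=false
        .setCacheIndexesOnWrite(false)   // cacheIndexesOnWrite=false
        .setCacheBloomsOnWrite(false)    // cacheBloomsOnWrite=false
        .setEvictBlocksOnClose(false)    // cacheEvictOnClose=false
        .setPrefetchBlocksOnOpen(false)  // prefetchOnOpen=false
        .build();
  }
}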
2023-07-13 03:16:19,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43c7234259f6da500759a6e1f628fe78, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 03:16:19,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,636 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,638 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,638 DEBUG [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/f 2023-07-13 03:16:19,638 DEBUG [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/f 2023-07-13 03:16:19,639 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 78e6979d0d289e5998cdb743fccea0c7 columnFamilyName f 2023-07-13 03:16:19,639 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] regionserver.HStore(310): Store=78e6979d0d289e5998cdb743fccea0c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:19,640 DEBUG [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/f 2023-07-13 03:16:19,641 DEBUG [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/f 2023-07-13 03:16:19,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,642 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,642 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43c7234259f6da500759a6e1f628fe78 columnFamilyName f 2023-07-13 03:16:19,643 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] regionserver.HStore(310): Store=43c7234259f6da500759a6e1f628fe78/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:19,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:19,648 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:19,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:19,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 78e6979d0d289e5998cdb743fccea0c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11129030560, jitterRate=0.036471739411354065}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:19,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 78e6979d0d289e5998cdb743fccea0c7: 2023-07-13 03:16:19,650 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7., pid=28, masterSystemTime=1689218179627 2023-07-13 03:16:19,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:19,652 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 83f7e87289c2c60762ebf26a0789eaaa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 03:16:19,653 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:19,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,653 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179653"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218179653"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218179653"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218179653"}]},"ts":"1689218179653"} 2023-07-13 
03:16:19,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,658 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,660 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:19,661 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 43c7234259f6da500759a6e1f628fe78; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10322414880, jitterRate=-0.03865019977092743}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:19,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 43c7234259f6da500759a6e1f628fe78: 2023-07-13 03:16:19,661 DEBUG [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/f 2023-07-13 03:16:19,661 DEBUG [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/f 2023-07-13 03:16:19,662 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78., pid=31, masterSystemTime=1689218179630 2023-07-13 03:16:19,662 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 83f7e87289c2c60762ebf26a0789eaaa columnFamilyName f 2023-07-13 03:16:19,662 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=23 2023-07-13 03:16:19,663 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=23, state=SUCCESS; OpenRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,44171,1689218172445 in 179 msec 2023-07-13 03:16:19,663 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] 
regionserver.HStore(310): Store=83f7e87289c2c60762ebf26a0789eaaa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:19,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:19,667 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:19,667 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de12356ae110fb148dc5fed11bfe84b7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,668 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, ASSIGN in 354 msec 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,668 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,668 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,668 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179668"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218179668"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218179668"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218179668"}]},"ts":"1689218179668"} 2023-07-13 03:16:19,670 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,672 DEBUG [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/f 2023-07-13 03:16:19,672 DEBUG [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/f 2023-07-13 03:16:19,673 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de12356ae110fb148dc5fed11bfe84b7 columnFamilyName f 2023-07-13 03:16:19,673 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] regionserver.HStore(310): Store=de12356ae110fb148dc5fed11bfe84b7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:19,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:19,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,677 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=22 2023-07-13 03:16:19,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:19,678 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): 
Finished pid=31, ppid=22, state=SUCCESS; OpenRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,44325,1689218176275 in 189 msec 2023-07-13 03:16:19,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 83f7e87289c2c60762ebf26a0789eaaa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11205806400, jitterRate=0.04362204670906067}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:19,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 83f7e87289c2c60762ebf26a0789eaaa: 2023-07-13 03:16:19,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa., pid=27, masterSystemTime=1689218179627 2023-07-13 03:16:19,680 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, ASSIGN in 369 msec 2023-07-13 03:16:19,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:19,682 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:19,682 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218179682"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218179682"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218179682"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218179682"}]},"ts":"1689218179682"} 2023-07-13 03:16:19,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:19,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=26 2023-07-13 03:16:19,694 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=26, state=SUCCESS; OpenRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,44171,1689218172445 in 210 msec 2023-07-13 03:16:19,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, 
ASSIGN in 385 msec 2023-07-13 03:16:19,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:19,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened de12356ae110fb148dc5fed11bfe84b7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11615998080, jitterRate=0.08182412385940552}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:19,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for de12356ae110fb148dc5fed11bfe84b7: 2023-07-13 03:16:19,700 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7., pid=29, masterSystemTime=1689218179630 2023-07-13 03:16:19,703 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,703 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:19,704 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 
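In the "Opened <region>" entries the desiredMaxFileSize differs from region to region only because ConstantSizeRegionSplitPolicy applies the logged jitterRate to the configured maximum store file size (about 10 GB here). If a test wanted region splitting out of the picture altogether, one option, sketched under the assumption of the standard 2.4 client API and not taken from this test, is to pin a no-op split policy on the table descriptor.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicySketch {
  // Table descriptor that disables automatic splitting for the lifetime of the test.
  static TableDescriptor noSplitDescriptor() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}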
2023-07-13 03:16:19,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01fead1a7e0c2fc4b6e58d7bbd7db30e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 03:16:19,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:19,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,705 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,705 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179705"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218179705"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218179705"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218179705"}]},"ts":"1689218179705"} 2023-07-13 03:16:19,707 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,709 DEBUG [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/f 2023-07-13 03:16:19,709 DEBUG [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/f 2023-07-13 03:16:19,710 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01fead1a7e0c2fc4b6e58d7bbd7db30e columnFamilyName f 2023-07-13 03:16:19,710 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] regionserver.HStore(310): Store=01fead1a7e0c2fc4b6e58d7bbd7db30e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:19,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=24 2023-07-13 03:16:19,712 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=24, state=SUCCESS; OpenRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,44325,1689218176275 in 231 msec 2023-07-13 03:16:19,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,713 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, ASSIGN in 403 msec 2023-07-13 03:16:19,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:19,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:19,718 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 01fead1a7e0c2fc4b6e58d7bbd7db30e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10337821920, jitterRate=-0.03721530735492706}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:19,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 01fead1a7e0c2fc4b6e58d7bbd7db30e: 2023-07-13 03:16:19,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e., pid=30, masterSystemTime=1689218179630 2023-07-13 03:16:19,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 
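The remaining entries show CreateTableProcedure pid=21 completing, the test utility waiting until every region of the table is assigned, and the table then being moved to rsgroup Group_testTableMoveTruncateAndDrop_1739069322, which is what queues the REOPEN/MOVE TransitRegionStateProcedures. A hedged sketch of the matching client-side calls, assuming the RSGroupAdminClient from the branch-2.4 hbase-rsgroup module; TestRSGroupsAdmin1 wraps this client in its own helpers, so treat the names below as illustrative.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  // Waits for the freshly created table to be fully assigned, then moves it to the
  // target rsgroup; the move is what triggers the REOPEN/MOVE procedures in the log.
  static void waitAndMove(HBaseTestingUtility util, String targetGroup) throws Exception {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    util.waitUntilAllRegionsAssigned(tn);
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(util.getConnection());
    rsGroupAdmin.moveTables(Collections.singleton(tn), targetGroup);
  }
}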
2023-07-13 03:16:19,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:19,721 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:19,721 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218179721"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218179721"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218179721"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218179721"}]},"ts":"1689218179721"} 2023-07-13 03:16:19,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-13 03:16:19,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; OpenRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 242 msec 2023-07-13 03:16:19,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-13 03:16:19,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, ASSIGN in 418 msec 2023-07-13 03:16:19,732 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:19,732 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218179732"}]},"ts":"1689218179732"} 2023-07-13 03:16:19,735 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 03:16:19,737 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:19,740 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 748 msec 2023-07-13 03:16:20,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:20,124 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-13 03:16:20,124 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-13 03:16:20,125 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:20,126 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37181] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 148.251.75.209:51396 deadline: 1689218240126, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44171 startCode=1689218172445. As of locationSeqNum=16. 2023-07-13 03:16:20,229 DEBUG [hconnection-0x2b42746c-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:20,235 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45392, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:20,246 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-13 03:16:20,246 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:20,247 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-13 03:16:20,247 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:20,252 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:20,254 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44136, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:20,257 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:20,261 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39460, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:20,262 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:20,266 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:45398, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:20,268 DEBUG [Listener at localhost.localdomain/36261] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:20,270 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35512, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:20,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:20,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for 
RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:20,283 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:20,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:20,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:20,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 43c7234259f6da500759a6e1f628fe78 to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:20,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, REOPEN/MOVE 2023-07-13 03:16:20,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 78e6979d0d289e5998cdb743fccea0c7 to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,308 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, REOPEN/MOVE 2023-07-13 03:16:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:20,309 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:20,310 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:20,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, REOPEN/MOVE 2023-07-13 03:16:20,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region de12356ae110fb148dc5fed11bfe84b7 to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,310 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180310"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180310"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180310"}]},"ts":"1689218180310"} 2023-07-13 03:16:20,311 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, REOPEN/MOVE 2023-07-13 03:16:20,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:20,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:20,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:20,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:20,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:20,313 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:20,313 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180313"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180313"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180313"}]},"ts":"1689218180313"} 2023-07-13 03:16:20,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, REOPEN/MOVE 2023-07-13 03:16:20,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 01fead1a7e0c2fc4b6e58d7bbd7db30e to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,314 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, REOPEN/MOVE 2023-07-13 03:16:20,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:20,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:20,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:20,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:20,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:20,317 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; CloseRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:20,317 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:20,317 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180317"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180317"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180317"}]},"ts":"1689218180317"} 2023-07-13 03:16:20,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:20,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, REOPEN/MOVE 2023-07-13 03:16:20,319 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 83f7e87289c2c60762ebf26a0789eaaa to RSGroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:20,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:20,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:20,320 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=34, state=RUNNABLE; CloseRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:20,321 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, REOPEN/MOVE 2023-07-13 03:16:20,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:20,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:20,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:20,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, REOPEN/MOVE 2023-07-13 03:16:20,324 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:20,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1739069322, current retry=0 2023-07-13 03:16:20,324 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180324"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180324"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180324"}]},"ts":"1689218180324"} 2023-07-13 03:16:20,326 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, REOPEN/MOVE 2023-07-13 03:16:20,327 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:20,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=35, state=RUNNABLE; CloseRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:20,327 DEBUG [PEWorker-5] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180327"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180327"}]},"ts":"1689218180327"} 2023-07-13 03:16:20,329 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=39, state=RUNNABLE; CloseRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:20,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 01fead1a7e0c2fc4b6e58d7bbd7db30e, disabling compactions & flushes 2023-07-13 03:16:20,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. after waiting 0 ms 2023-07-13 03:16:20,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,476 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 78e6979d0d289e5998cdb743fccea0c7, disabling compactions & flushes 2023-07-13 03:16:20,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. after waiting 0 ms 2023-07-13 03:16:20,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 
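The master-side activity recorded above (RSGroupAdminEndpoint receiving "move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1739069322", the rsgroup znode updates, and the REOPEN/MOVE TransitRegionStateProcedures pid=32..35 and 39) corresponds to a single client-side move request. A minimal sketch of such a request follows; it is illustrative only, not taken from this run: the group and table names are copied from the log, while the connection setup is assumed.

// Illustrative sketch only; not produced by this test run. It approximates the client
// call behind the "move tables ... to rsgroup ..." request logged by RSGroupAdminEndpoint.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The target group already exists at this point in the test (it was created and
      // given servers earlier). moveTables asks the master to reassign every region of
      // the table, which appears above as REOPEN/MOVE TransitRegionStateProcedures.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_1739069322");
    }
  }
}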
2023-07-13 03:16:20,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:20,493 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:20,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 01fead1a7e0c2fc4b6e58d7bbd7db30e: 2023-07-13 03:16:20,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 01fead1a7e0c2fc4b6e58d7bbd7db30e move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:20,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 78e6979d0d289e5998cdb743fccea0c7: 2023-07-13 03:16:20,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 78e6979d0d289e5998cdb743fccea0c7 move to jenkins-hbase20.apache.org,32993,1689218172776 record at close sequenceid=2 2023-07-13 03:16:20,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing de12356ae110fb148dc5fed11bfe84b7, disabling compactions & flushes 2023-07-13 03:16:20,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. after waiting 0 ms 2023-07-13 03:16:20,499 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 
2023-07-13 03:16:20,499 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=CLOSED 2023-07-13 03:16:20,499 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218180499"}]},"ts":"1689218180499"} 2023-07-13 03:16:20,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 83f7e87289c2c60762ebf26a0789eaaa, disabling compactions & flushes 2023-07-13 03:16:20,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:20,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:20,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. after waiting 0 ms 2023-07-13 03:16:20,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:20,501 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=CLOSED 2023-07-13 03:16:20,502 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180501"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218180501"}]},"ts":"1689218180501"} 2023-07-13 03:16:20,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:20,510 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=35 2023-07-13 03:16:20,510 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=35, state=SUCCESS; CloseRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 176 msec 2023-07-13 03:16:20,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 
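The RegionStateStore Put entries throughout this sequence record each region's transition in hbase:meta under the info family (qualifiers regioninfo, sn, state, and later server, serverstartcode, seqnumDuringOpen). A hedged sketch of reading those columns back with the ordinary client API follows; the meta row key is copied from one of the Puts above, everything else is generic setup assumed for illustration.

// Illustrative sketch, not from the log: read the info:state / info:sn columns that the
// RegionStateStore Puts above write into hbase:meta during the CLOSING/CLOSED transition.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadRegionStateFromMeta {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // The hbase:meta row key is the full region name, exactly as it appears in the log.
    byte[] row = Bytes.toBytes(
        "Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Get get = new Get(row);
      get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("state")); // CLOSING/CLOSED/OPENING/OPEN
      get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("sn"));    // server carrying the transition
      Result r = meta.get(get);
      System.out.println("state = "
          + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"))));
      System.out.println("sn    = "
          + Bytes.toString(r.getValue(Bytes.toBytes("info"), Bytes.toBytes("sn"))));
    }
  }
}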
2023-07-13 03:16:20,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 83f7e87289c2c60762ebf26a0789eaaa: 2023-07-13 03:16:20,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 83f7e87289c2c60762ebf26a0789eaaa move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:20,512 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:20,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-13 03:16:20,512 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:20,512 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,44171,1689218172445 in 187 msec 2023-07-13 03:16:20,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,513 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for de12356ae110fb148dc5fed11bfe84b7: 2023-07-13 03:16:20,513 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding de12356ae110fb148dc5fed11bfe84b7 move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:20,514 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,32993,1689218172776; forceNewPlan=false, retain=false 2023-07-13 03:16:20,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,516 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=CLOSED 2023-07-13 03:16:20,516 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180515"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218180515"}]},"ts":"1689218180515"} 2023-07-13 03:16:20,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 
43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 43c7234259f6da500759a6e1f628fe78, disabling compactions & flushes 2023-07-13 03:16:20,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. after waiting 0 ms 2023-07-13 03:16:20,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,518 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=CLOSED 2023-07-13 03:16:20,518 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180518"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218180518"}]},"ts":"1689218180518"} 2023-07-13 03:16:20,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=39 2023-07-13 03:16:20,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=34 2023-07-13 03:16:20,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=39, state=SUCCESS; CloseRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,44171,1689218172445 in 190 msec 2023-07-13 03:16:20,524 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=34, state=SUCCESS; CloseRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,44325,1689218176275 in 201 msec 2023-07-13 03:16:20,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:20,525 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=39, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:20,525 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; 
forceNewPlan=false, retain=false 2023-07-13 03:16:20,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 43c7234259f6da500759a6e1f628fe78: 2023-07-13 03:16:20,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 43c7234259f6da500759a6e1f628fe78 move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:20,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,530 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=CLOSED 2023-07-13 03:16:20,530 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180530"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218180530"}]},"ts":"1689218180530"} 2023-07-13 03:16:20,534 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-13 03:16:20,534 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; CloseRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,44325,1689218176275 in 215 msec 2023-07-13 03:16:20,535 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:20,542 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 03:16:20,620 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 03:16:20,620 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-13 03:16:20,621 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:20,621 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-13 03:16:20,621 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 03:16:20,621 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-13 03:16:20,663 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-13 03:16:20,663 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,663 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180663"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180663"}]},"ts":"1689218180663"} 2023-07-13 03:16:20,664 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,664 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180664"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180664"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180664"}]},"ts":"1689218180664"} 2023-07-13 03:16:20,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,663 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180663"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180663"}]},"ts":"1689218180663"} 2023-07-13 03:16:20,664 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180663"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180663"}]},"ts":"1689218180663"} 2023-07-13 03:16:20,665 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:20,665 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180665"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218180665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218180665"}]},"ts":"1689218180665"} 2023-07-13 03:16:20,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=34, state=RUNNABLE; OpenRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:20,668 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=32, state=RUNNABLE; OpenRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:20,670 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=39, state=RUNNABLE; OpenRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:20,674 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=35, state=RUNNABLE; OpenRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:20,676 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=33, state=RUNNABLE; OpenRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:20,825 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 
2023-07-13 03:16:20,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01fead1a7e0c2fc4b6e58d7bbd7db30e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 03:16:20,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:20,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,827 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,829 DEBUG [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/f 2023-07-13 03:16:20,829 DEBUG [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/f 2023-07-13 03:16:20,829 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01fead1a7e0c2fc4b6e58d7bbd7db30e columnFamilyName f 2023-07-13 03:16:20,831 INFO [StoreOpener-01fead1a7e0c2fc4b6e58d7bbd7db30e-1] regionserver.HStore(310): Store=01fead1a7e0c2fc4b6e58d7bbd7db30e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:20,833 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,833 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:20,833 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:20,842 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:44138, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:20,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,856 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,856 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 78e6979d0d289e5998cdb743fccea0c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 03:16:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,858 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 01fead1a7e0c2fc4b6e58d7bbd7db30e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9498615840, jitterRate=-0.11537246406078339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:20,858 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 01fead1a7e0c2fc4b6e58d7bbd7db30e: 2023-07-13 03:16:20,859 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e., pid=45, masterSystemTime=1689218180820 2023-07-13 03:16:20,865 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,867 DEBUG [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/f 2023-07-13 03:16:20,867 DEBUG [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/f 2023-07-13 03:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:20,868 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 
2023-07-13 03:16:20,868 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 78e6979d0d289e5998cdb743fccea0c7 columnFamilyName f 2023-07-13 03:16:20,868 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180868"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218180868"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218180868"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218180868"}]},"ts":"1689218180868"} 2023-07-13 03:16:20,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 83f7e87289c2c60762ebf26a0789eaaa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 03:16:20,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:20,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,869 INFO [StoreOpener-78e6979d0d289e5998cdb743fccea0c7-1] regionserver.HStore(310): Store=78e6979d0d289e5998cdb743fccea0c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:20,870 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,871 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,872 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,874 DEBUG [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/f 2023-07-13 03:16:20,874 DEBUG [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/f 2023-07-13 03:16:20,875 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 83f7e87289c2c60762ebf26a0789eaaa columnFamilyName f 2023-07-13 03:16:20,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=35 2023-07-13 03:16:20,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=35, state=SUCCESS; OpenRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,37181,1689218172183 in 197 msec 2023-07-13 03:16:20,875 INFO [StoreOpener-83f7e87289c2c60762ebf26a0789eaaa-1] regionserver.HStore(310): Store=83f7e87289c2c60762ebf26a0789eaaa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:20,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,878 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, REOPEN/MOVE in 560 msec 2023-07-13 03:16:20,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:20,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,879 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 
78e6979d0d289e5998cdb743fccea0c7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10188944640, jitterRate=-0.05108058452606201}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:20,880 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 78e6979d0d289e5998cdb743fccea0c7: 2023-07-13 03:16:20,881 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7., pid=46, masterSystemTime=1689218180833 2023-07-13 03:16:20,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:20,887 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:20,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:20,889 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180887"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218180887"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218180887"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218180887"}]},"ts":"1689218180887"} 2023-07-13 03:16:20,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 83f7e87289c2c60762ebf26a0789eaaa; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10490111200, jitterRate=-0.023032262921333313}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:20,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 83f7e87289c2c60762ebf26a0789eaaa: 2023-07-13 03:16:20,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa., pid=44, masterSystemTime=1689218180820 2023-07-13 03:16:20,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=33 2023-07-13 03:16:20,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=33, state=SUCCESS; OpenRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,32993,1689218172776 in 215 msec 2023-07-13 03:16:20,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 
2023-07-13 03:16:20,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:20,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 43c7234259f6da500759a6e1f628fe78, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 03:16:20,898 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,898 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180897"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218180897"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218180897"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218180897"}]},"ts":"1689218180897"} 2023-07-13 03:16:20,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:20,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,901 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, REOPEN/MOVE in 587 msec 2023-07-13 03:16:20,901 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,907 DEBUG [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/f 2023-07-13 03:16:20,907 DEBUG [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/f 2023-07-13 03:16:20,907 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 43c7234259f6da500759a6e1f628fe78 columnFamilyName f 2023-07-13 03:16:20,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=39 2023-07-13 03:16:20,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=39, state=SUCCESS; OpenRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,37181,1689218172183 in 231 msec 2023-07-13 03:16:20,909 INFO [StoreOpener-43c7234259f6da500759a6e1f628fe78-1] regionserver.HStore(310): Store=43c7234259f6da500759a6e1f628fe78/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:20,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, REOPEN/MOVE in 587 msec 2023-07-13 03:16:20,912 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:20,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 43c7234259f6da500759a6e1f628fe78; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10057890400, jitterRate=-0.0632859617471695}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:20,918 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 43c7234259f6da500759a6e1f628fe78: 2023-07-13 03:16:20,919 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78., pid=43, masterSystemTime=1689218180820 2023-07-13 03:16:20,922 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:20,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de12356ae110fb148dc5fed11bfe84b7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 03:16:20,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:20,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,923 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,924 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,924 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218180923"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218180923"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218180923"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218180923"}]},"ts":"1689218180923"} 2023-07-13 03:16:20,926 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,928 DEBUG [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/f 2023-07-13 03:16:20,928 DEBUG [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/f 2023-07-13 03:16:20,928 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de12356ae110fb148dc5fed11bfe84b7 columnFamilyName f 2023-07-13 03:16:20,929 INFO [StoreOpener-de12356ae110fb148dc5fed11bfe84b7-1] regionserver.HStore(310): Store=de12356ae110fb148dc5fed11bfe84b7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:20,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=32 2023-07-13 03:16:20,929 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=32, state=SUCCESS; OpenRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,37181,1689218172183 in 258 msec 2023-07-13 03:16:20,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,932 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, REOPEN/MOVE in 623 msec 2023-07-13 03:16:20,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:20,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened de12356ae110fb148dc5fed11bfe84b7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10946022080, jitterRate=0.019427746534347534}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:20,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for de12356ae110fb148dc5fed11bfe84b7: 2023-07-13 03:16:20,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7., pid=42, masterSystemTime=1689218180820 2023-07-13 03:16:20,940 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,940 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:20,941 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:20,941 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218180940"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218180940"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218180940"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218180940"}]},"ts":"1689218180940"} 2023-07-13 03:16:20,945 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=34 2023-07-13 03:16:20,945 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=34, state=SUCCESS; OpenRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,37181,1689218172183 in 277 msec 2023-07-13 03:16:20,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, REOPEN/MOVE in 633 msec 2023-07-13 03:16:21,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-13 03:16:21,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1739069322. 
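The entries above show the REOPEN/MOVE procedures finishing and RSGroupAdminServer confirming that all regions of Group_testTableMoveTruncateAndDrop were moved to the target group, after which the client issues RSGroupAdminService.MoveTables and GetRSGroupInfoOfTable requests. A minimal client-side sketch of that sequence, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API (the table and group names are copied from the log; everything else is illustrative):

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      String targetGroup = "Group_testTableMoveTruncateAndDrop_1739069322"; // group name taken from the log

      // Issues RSGroupAdminService.MoveTables; the master then reopens every
      // region of the table on servers that belong to the target group.
      rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);

      // Issues RSGroupAdminService.GetRSGroupInfoOfTable, mirroring the
      // verification request visible in the log.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("Table now belongs to group: " + info.getName());
    }
  }
}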
2023-07-13 03:16:21,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:21,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:21,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:21,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:21,338 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:21,346 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,363 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218181363"}]},"ts":"1689218181363"} 2023-07-13 03:16:21,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-13 03:16:21,365 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 03:16:21,367 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 03:16:21,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, UNASSIGN}] 2023-07-13 03:16:21,375 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, UNASSIGN 2023-07-13 03:16:21,375 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, UNASSIGN 2023-07-13 03:16:21,375 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, UNASSIGN 2023-07-13 03:16:21,376 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, UNASSIGN 2023-07-13 03:16:21,376 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, UNASSIGN 2023-07-13 03:16:21,377 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:21,377 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:21,377 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:21,377 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:21,377 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:21,378 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218181377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218181377"}]},"ts":"1689218181377"} 2023-07-13 03:16:21,377 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218181377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218181377"}]},"ts":"1689218181377"} 2023-07-13 03:16:21,377 
DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218181377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218181377"}]},"ts":"1689218181377"} 2023-07-13 03:16:21,377 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218181377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218181377"}]},"ts":"1689218181377"} 2023-07-13 03:16:21,378 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181377"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218181377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218181377"}]},"ts":"1689218181377"} 2023-07-13 03:16:21,380 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=51, state=RUNNABLE; CloseRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:21,381 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:21,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=50, state=RUNNABLE; CloseRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:21,386 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=48, state=RUNNABLE; CloseRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:21,388 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; CloseRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:21,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-13 03:16:21,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:21,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing de12356ae110fb148dc5fed11bfe84b7, disabling compactions & flushes 2023-07-13 03:16:21,536 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:21,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 
2023-07-13 03:16:21,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. after waiting 0 ms 2023-07-13 03:16:21,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:21,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:21,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 78e6979d0d289e5998cdb743fccea0c7, disabling compactions & flushes 2023-07-13 03:16:21,539 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:21,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:21,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. after waiting 0 ms 2023-07-13 03:16:21,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 2023-07-13 03:16:21,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:21,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7. 2023-07-13 03:16:21,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for de12356ae110fb148dc5fed11bfe84b7: 2023-07-13 03:16:21,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:21,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:21,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7. 
2023-07-13 03:16:21,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:21,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 78e6979d0d289e5998cdb743fccea0c7: 2023-07-13 03:16:21,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 83f7e87289c2c60762ebf26a0789eaaa, disabling compactions & flushes 2023-07-13 03:16:21,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:21,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:21,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. after waiting 0 ms 2023-07-13 03:16:21,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 2023-07-13 03:16:21,554 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=de12356ae110fb148dc5fed11bfe84b7, regionState=CLOSED 2023-07-13 03:16:21,554 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181554"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181554"}]},"ts":"1689218181554"} 2023-07-13 03:16:21,558 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:21,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=78e6979d0d289e5998cdb743fccea0c7, regionState=CLOSED 2023-07-13 03:16:21,559 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181558"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181558"}]},"ts":"1689218181558"} 2023-07-13 03:16:21,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:21,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa. 
2023-07-13 03:16:21,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 83f7e87289c2c60762ebf26a0789eaaa: 2023-07-13 03:16:21,563 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=50 2023-07-13 03:16:21,563 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=50, state=SUCCESS; CloseRegionProcedure de12356ae110fb148dc5fed11bfe84b7, server=jenkins-hbase20.apache.org,37181,1689218172183 in 172 msec 2023-07-13 03:16:21,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:21,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:21,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-13 03:16:21,565 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=83f7e87289c2c60762ebf26a0789eaaa, regionState=CLOSED 2023-07-13 03:16:21,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure 78e6979d0d289e5998cdb743fccea0c7, server=jenkins-hbase20.apache.org,32993,1689218172776 in 180 msec 2023-07-13 03:16:21,565 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181565"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181565"}]},"ts":"1689218181565"} 2023-07-13 03:16:21,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 43c7234259f6da500759a6e1f628fe78, disabling compactions & flushes 2023-07-13 03:16:21,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:21,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:21,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. after waiting 0 ms 2023-07-13 03:16:21,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 
2023-07-13 03:16:21,568 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de12356ae110fb148dc5fed11bfe84b7, UNASSIGN in 193 msec 2023-07-13 03:16:21,570 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=78e6979d0d289e5998cdb743fccea0c7, UNASSIGN in 195 msec 2023-07-13 03:16:21,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-13 03:16:21,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; CloseRegionProcedure 83f7e87289c2c60762ebf26a0789eaaa, server=jenkins-hbase20.apache.org,37181,1689218172183 in 182 msec 2023-07-13 03:16:21,576 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:21,576 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=83f7e87289c2c60762ebf26a0789eaaa, UNASSIGN in 202 msec 2023-07-13 03:16:21,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78. 2023-07-13 03:16:21,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 43c7234259f6da500759a6e1f628fe78: 2023-07-13 03:16:21,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:21,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:21,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 01fead1a7e0c2fc4b6e58d7bbd7db30e, disabling compactions & flushes 2023-07-13 03:16:21,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:21,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:21,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. after waiting 0 ms 2023-07-13 03:16:21,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 
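The disable seen above is driven by DisableTableProcedure pid=47, which fans out one TransitRegionStateProcedure (UNASSIGN) per region; each of those runs a CloseRegionProcedure on the hosting region server, and the server writes a recovered.edits/<n>.seqid marker as it closes the region. From the client's side this is a single blocking call; a minimal sketch with the standard Admin API (the "Checking to see if procedure is done pid=47" lines are the client polling that call's procedure):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Submits a DisableTableProcedure on the master and blocks until the
      // procedure completes, which is what produces the repeated
      // "Checking to see if procedure is done pid=..." entries in the log.
      admin.disableTable(table);
      System.out.println("disabled=" + admin.isTableDisabled(table));
    }
  }
}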
2023-07-13 03:16:21,582 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=43c7234259f6da500759a6e1f628fe78, regionState=CLOSED 2023-07-13 03:16:21,582 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181582"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181582"}]},"ts":"1689218181582"} 2023-07-13 03:16:21,588 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=48 2023-07-13 03:16:21,588 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=48, state=SUCCESS; CloseRegionProcedure 43c7234259f6da500759a6e1f628fe78, server=jenkins-hbase20.apache.org,37181,1689218172183 in 198 msec 2023-07-13 03:16:21,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:21,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e. 2023-07-13 03:16:21,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 01fead1a7e0c2fc4b6e58d7bbd7db30e: 2023-07-13 03:16:21,590 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=43c7234259f6da500759a6e1f628fe78, UNASSIGN in 218 msec 2023-07-13 03:16:21,592 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:21,595 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=01fead1a7e0c2fc4b6e58d7bbd7db30e, regionState=CLOSED 2023-07-13 03:16:21,595 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181595"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181595"}]},"ts":"1689218181595"} 2023-07-13 03:16:21,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=51 2023-07-13 03:16:21,600 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=51, state=SUCCESS; CloseRegionProcedure 01fead1a7e0c2fc4b6e58d7bbd7db30e, server=jenkins-hbase20.apache.org,37181,1689218172183 in 217 msec 2023-07-13 03:16:21,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-13 03:16:21,603 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01fead1a7e0c2fc4b6e58d7bbd7db30e, UNASSIGN in 230 msec 2023-07-13 03:16:21,604 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218181604"}]},"ts":"1689218181604"} 2023-07-13 03:16:21,606 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 03:16:21,607 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 03:16:21,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 257 msec 2023-07-13 03:16:21,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-13 03:16:21,669 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-13 03:16:21,670 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$6(2260): Client=jenkins//148.251.75.209 truncate Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:21,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-13 03:16:21,688 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-13 03:16:21,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-13 03:16:21,703 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:21,703 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:21,703 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:21,703 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:21,703 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:21,707 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits] 2023-07-13 03:16:21,707 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits] 2023-07-13 03:16:21,707 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits] 2023-07-13 03:16:21,707 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits] 2023-07-13 03:16:21,707 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits] 2023-07-13 03:16:21,720 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa/recovered.edits/7.seqid 2023-07-13 03:16:21,720 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7/recovered.edits/7.seqid 2023-07-13 03:16:21,720 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e/recovered.edits/7.seqid 2023-07-13 03:16:21,720 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78/recovered.edits/7.seqid 2023-07-13 03:16:21,721 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7/recovered.edits/7.seqid 2023-07-13 03:16:21,721 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/83f7e87289c2c60762ebf26a0789eaaa 2023-07-13 03:16:21,722 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de12356ae110fb148dc5fed11bfe84b7 2023-07-13 03:16:21,722 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01fead1a7e0c2fc4b6e58d7bbd7db30e 2023-07-13 03:16:21,722 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/43c7234259f6da500759a6e1f628fe78 2023-07-13 03:16:21,722 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/78e6979d0d289e5998cdb743fccea0c7 2023-07-13 03:16:21,722 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 03:16:21,755 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 03:16:21,759 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 03:16:21,760 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
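The truncate recorded above runs as TruncateTableProcedure with preserveSplits=true: the old region directories are moved aside by HFileArchiver, the five region rows are removed from hbase:meta, and fresh regions with the same split boundaries are created under .tmp. A minimal sketch of the client call that triggers this, using the standard Admin API on a table that has already been disabled:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncateTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // The table must already be disabled (see the DisableTableProcedure above).
      // preserveSplits=true keeps the existing split boundaries, so the master
      // recreates one region per original split, as the log shows.
      admin.truncateTable(table, true);
    }
  }
}

With preserveSplits=false the table would instead come back as a single empty region.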
2023-07-13 03:16:21,760 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218181760"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,761 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218181760"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,761 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218181760"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,761 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218181760"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,761 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218181760"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,765 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 03:16:21,765 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 43c7234259f6da500759a6e1f628fe78, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218178985.43c7234259f6da500759a6e1f628fe78.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 78e6979d0d289e5998cdb743fccea0c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218178985.78e6979d0d289e5998cdb743fccea0c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => de12356ae110fb148dc5fed11bfe84b7, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218178985.de12356ae110fb148dc5fed11bfe84b7.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 01fead1a7e0c2fc4b6e58d7bbd7db30e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218178985.01fead1a7e0c2fc4b6e58d7bbd7db30e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 83f7e87289c2c60762ebf26a0789eaaa, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218178985.83f7e87289c2c60762ebf26a0789eaaa.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 03:16:21,765 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-13 03:16:21,765 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218181765"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:21,768 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 03:16:21,776 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:21,776 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:21,776 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:21,776 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:21,776 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 empty. 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c empty. 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a empty. 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be empty. 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 empty. 
2023-07-13 03:16:21,777 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:21,777 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:21,778 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:21,778 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 03:16:21,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-13 03:16:21,803 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:21,815 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 35e74b4e8c3e0b1790a78551ec92314c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:21,815 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7249b32725a4e7fd2b734510957ddb0a, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:21,839 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => f5c4e050bfbe648d78474a0e89eaae95, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 35e74b4e8c3e0b1790a78551ec92314c, disabling compactions & flushes 2023-07-13 03:16:21,891 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. after waiting 0 ms 2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:21,891 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 
2023-07-13 03:16:21,891 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 35e74b4e8c3e0b1790a78551ec92314c: 2023-07-13 03:16:21,892 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 01ed4b356b529c42f25f4ba67ec393be, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:21,892 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:21,892 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7249b32725a4e7fd2b734510957ddb0a, disabling compactions & flushes 2023-07-13 03:16:21,893 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. after waiting 0 ms 2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:21,893 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 
2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7249b32725a4e7fd2b734510957ddb0a: 2023-07-13 03:16:21,893 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8f281f2d9b408fa8823ecad1982f2904, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:21,893 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing f5c4e050bfbe648d78474a0e89eaae95, disabling compactions & flushes 2023-07-13 03:16:21,894 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:21,894 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:21,894 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. after waiting 0 ms 2023-07-13 03:16:21,894 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:21,894 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 
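Note: the "creating {ENCODED => ...}" entries above print the table descriptor and split boundaries that the truncate procedure re-applies. A minimal sketch, assuming the standard HBase 2.x client builders, of how the same family attributes and split keys would be declared on the client side (the class name is illustrative and not part of the test):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class DescriptorSketch {
  public static void main(String[] args) {
    // Column family 'f' with the attributes printed in the log entries above.
    ColumnFamilyDescriptor f = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("f"))
        .setMaxVersions(1)                   // VERSIONS => '1'
        .setBloomFilterType(BloomType.NONE)  // BLOOMFILTER => 'NONE'
        .setBlocksize(65536)                 // BLOCKSIZE => '65536'
        .setInMemory(false)                  // IN_MEMORY => 'false'
        .build();

    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setRegionReplication(1)             // REGION_REPLICATION => '1'
        .setColumnFamily(f)
        .build();

    // Split keys matching the STARTKEY/ENDKEY boundaries in the creation entries.
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
    };

    System.out.println(td + " with " + splitKeys.length + " split keys");
  }
}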
2023-07-13 03:16:21,894 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for f5c4e050bfbe648d78474a0e89eaae95: 2023-07-13 03:16:21,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:21,917 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8f281f2d9b408fa8823ecad1982f2904, disabling compactions & flushes 2023-07-13 03:16:21,917 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:21,918 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:21,918 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. after waiting 0 ms 2023-07-13 03:16:21,918 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:21,918 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:21,918 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8f281f2d9b408fa8823ecad1982f2904: 2023-07-13 03:16:21,919 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:21,920 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 01ed4b356b529c42f25f4ba67ec393be, disabling compactions & flushes 2023-07-13 03:16:21,920 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:21,920 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:21,920 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 
after waiting 0 ms 2023-07-13 03:16:21,920 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:21,920 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:21,920 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 01ed4b356b529c42f25f4ba67ec393be: 2023-07-13 03:16:21,924 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181924"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181924"}]},"ts":"1689218181924"} 2023-07-13 03:16:21,925 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181924"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181924"}]},"ts":"1689218181924"} 2023-07-13 03:16:21,925 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181924"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181924"}]},"ts":"1689218181924"} 2023-07-13 03:16:21,925 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218181924"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181924"}]},"ts":"1689218181924"} 2023-07-13 03:16:21,925 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218181924"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218181924"}]},"ts":"1689218181924"} 2023-07-13 03:16:21,930 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
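The re-creation of the five regions and the meta puts above are internal steps of the TruncateTableProcedure (pid=58) running with preserveSplits=true. A hedged client-side sketch of the call that starts such a procedure, assuming the standard Admin API and a table that has already been disabled:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class TruncateSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // preserveSplits=true keeps the five region boundaries, which is why the
      // procedure above re-creates regions with the original start/end keys.
      admin.truncateTable(tn, true);
    }
  }
}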
2023-07-13 03:16:21,932 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218181931"}]},"ts":"1689218181931"} 2023-07-13 03:16:21,934 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-13 03:16:21,937 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:21,937 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:21,937 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:21,937 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:21,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, ASSIGN}] 2023-07-13 03:16:21,940 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, ASSIGN 2023-07-13 03:16:21,940 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, ASSIGN 2023-07-13 03:16:21,940 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, ASSIGN 2023-07-13 03:16:21,940 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, ASSIGN 2023-07-13 03:16:21,941 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, ASSIGN 2023-07-13 03:16:21,942 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:21,942 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:21,942 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,32993,1689218172776; forceNewPlan=false, retain=false 2023-07-13 03:16:21,942 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,32993,1689218172776; forceNewPlan=false, retain=false 2023-07-13 03:16:21,942 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:21,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-13 03:16:22,092 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
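After the balancer produces an assignment plan, each region gets a TransitRegionStateProcedure and, below, an OpenRegionProcedure on the chosen region server. A small sketch, using only the public RegionLocator API, of how a client could inspect where those regions ended up (connection setup assumed):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public final class LocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(
             TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Encoded region name and the region server it was assigned to.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}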
2023-07-13 03:16:22,097 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=01ed4b356b529c42f25f4ba67ec393be, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,098 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182097"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182097"}]},"ts":"1689218182097"} 2023-07-13 03:16:22,098 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=f5c4e050bfbe648d78474a0e89eaae95, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,098 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=35e74b4e8c3e0b1790a78551ec92314c, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,098 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182098"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182098"}]},"ts":"1689218182098"} 2023-07-13 03:16:22,098 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182098"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182098"}]},"ts":"1689218182098"} 2023-07-13 03:16:22,099 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=7249b32725a4e7fd2b734510957ddb0a, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,099 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=8f281f2d9b408fa8823ecad1982f2904, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,099 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182099"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182099"}]},"ts":"1689218182099"} 2023-07-13 03:16:22,099 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182099"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182099"}]},"ts":"1689218182099"} 2023-07-13 03:16:22,101 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=62, state=RUNNABLE; OpenRegionProcedure 
01ed4b356b529c42f25f4ba67ec393be, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=61, state=RUNNABLE; OpenRegionProcedure f5c4e050bfbe648d78474a0e89eaae95, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=59, state=RUNNABLE; OpenRegionProcedure 35e74b4e8c3e0b1790a78551ec92314c, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,110 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=60, state=RUNNABLE; OpenRegionProcedure 7249b32725a4e7fd2b734510957ddb0a, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:22,112 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=63, state=RUNNABLE; OpenRegionProcedure 8f281f2d9b408fa8823ecad1982f2904, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:22,261 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:22,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5c4e050bfbe648d78474a0e89eaae95, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 03:16:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,264 INFO [StoreOpener-f5c4e050bfbe648d78474a0e89eaae95-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,266 DEBUG [StoreOpener-f5c4e050bfbe648d78474a0e89eaae95-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/f 2023-07-13 03:16:22,266 DEBUG [StoreOpener-f5c4e050bfbe648d78474a0e89eaae95-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/f 2023-07-13 
03:16:22,267 INFO [StoreOpener-f5c4e050bfbe648d78474a0e89eaae95-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5c4e050bfbe648d78474a0e89eaae95 columnFamilyName f 2023-07-13 03:16:22,267 INFO [StoreOpener-f5c4e050bfbe648d78474a0e89eaae95-1] regionserver.HStore(310): Store=f5c4e050bfbe648d78474a0e89eaae95/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:22,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,269 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:22,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7249b32725a4e7fd2b734510957ddb0a, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 03:16:22,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:22,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,271 INFO [StoreOpener-7249b32725a4e7fd2b734510957ddb0a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,273 DEBUG 
[StoreOpener-7249b32725a4e7fd2b734510957ddb0a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/f 2023-07-13 03:16:22,273 DEBUG [StoreOpener-7249b32725a4e7fd2b734510957ddb0a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/f 2023-07-13 03:16:22,274 INFO [StoreOpener-7249b32725a4e7fd2b734510957ddb0a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7249b32725a4e7fd2b734510957ddb0a columnFamilyName f 2023-07-13 03:16:22,275 INFO [StoreOpener-7249b32725a4e7fd2b734510957ddb0a-1] regionserver.HStore(310): Store=7249b32725a4e7fd2b734510957ddb0a/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:22,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:22,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:22,281 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f5c4e050bfbe648d78474a0e89eaae95; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9524674880, jitterRate=-0.11294552683830261}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:22,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f5c4e050bfbe648d78474a0e89eaae95: 2023-07-13 
03:16:22,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95., pid=65, masterSystemTime=1689218182256 2023-07-13 03:16:22,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:22,291 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:22,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:22,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:22,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01ed4b356b529c42f25f4ba67ec393be, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 03:16:22,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:22,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7249b32725a4e7fd2b734510957ddb0a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11027927200, jitterRate=0.027055755257606506}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:22,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7249b32725a4e7fd2b734510957ddb0a: 2023-07-13 03:16:22,294 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=f5c4e050bfbe648d78474a0e89eaae95, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,294 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182294"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218182294"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218182294"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218182294"}]},"ts":"1689218182294"} 2023-07-13 03:16:22,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a., pid=67, masterSystemTime=1689218182265 2023-07-13 03:16:22,299 INFO [StoreOpener-01ed4b356b529c42f25f4ba67ec393be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-13 03:16:22,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:22,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:22,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 
2023-07-13 03:16:22,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f281f2d9b408fa8823ecad1982f2904, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 03:16:22,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:22,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,304 DEBUG [StoreOpener-01ed4b356b529c42f25f4ba67ec393be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/f 2023-07-13 03:16:22,304 DEBUG [StoreOpener-01ed4b356b529c42f25f4ba67ec393be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/f 2023-07-13 03:16:22,305 INFO [StoreOpener-01ed4b356b529c42f25f4ba67ec393be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01ed4b356b529c42f25f4ba67ec393be columnFamilyName f 2023-07-13 03:16:22,305 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=7249b32725a4e7fd2b734510957ddb0a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,306 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182305"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218182305"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218182305"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218182305"}]},"ts":"1689218182305"} 2023-07-13 03:16:22,306 INFO [StoreOpener-01ed4b356b529c42f25f4ba67ec393be-1] regionserver.HStore(310): Store=01ed4b356b529c42f25f4ba67ec393be/f, memstore 
type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:22,309 INFO [StoreOpener-8f281f2d9b408fa8823ecad1982f2904-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,310 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=61 2023-07-13 03:16:22,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=61, state=SUCCESS; OpenRegionProcedure f5c4e050bfbe648d78474a0e89eaae95, server=jenkins-hbase20.apache.org,37181,1689218172183 in 195 msec 2023-07-13 03:16:22,311 DEBUG [StoreOpener-8f281f2d9b408fa8823ecad1982f2904-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/f 2023-07-13 03:16:22,311 DEBUG [StoreOpener-8f281f2d9b408fa8823ecad1982f2904-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/f 2023-07-13 03:16:22,312 INFO [StoreOpener-8f281f2d9b408fa8823ecad1982f2904-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f281f2d9b408fa8823ecad1982f2904 columnFamilyName f 2023-07-13 03:16:22,312 INFO [StoreOpener-8f281f2d9b408fa8823ecad1982f2904-1] regionserver.HStore(310): Store=8f281f2d9b408fa8823ecad1982f2904/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:22,313 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=60 2023-07-13 03:16:22,313 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=f5c4e050bfbe648d78474a0e89eaae95, ASSIGN in 374 msec 2023-07-13 03:16:22,313 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=60, state=SUCCESS; OpenRegionProcedure 7249b32725a4e7fd2b734510957ddb0a, server=jenkins-hbase20.apache.org,32993,1689218172776 in 202 msec 2023-07-13 03:16:22,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,314 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:22,315 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, ASSIGN in 376 msec 2023-07-13 03:16:22,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:22,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:22,317 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 01ed4b356b529c42f25f4ba67ec393be; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10462486080, jitterRate=-0.025605052709579468}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:22,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 01ed4b356b529c42f25f4ba67ec393be: 2023-07-13 03:16:22,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be., pid=64, masterSystemTime=1689218182256 2023-07-13 03:16:22,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:22,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 8f281f2d9b408fa8823ecad1982f2904; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9729166560, jitterRate=-0.09390075504779816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:22,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region 
open journal for 8f281f2d9b408fa8823ecad1982f2904: 2023-07-13 03:16:22,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:22,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:22,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:22,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 35e74b4e8c3e0b1790a78551ec92314c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 03:16:22,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904., pid=68, masterSystemTime=1689218182265 2023-07-13 03:16:22,321 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=01ed4b356b529c42f25f4ba67ec393be, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,321 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182321"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218182321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218182321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218182321"}]},"ts":"1689218182321"} 2023-07-13 03:16:22,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:22,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:22,323 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 
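The periodic MasterRpcServices "Checking to see if procedure is done pid=58" entries are the client-side future polling the master for procedure completion. A sketch of the asynchronous Admin call that produces such polling, with an assumed five-minute wait (timeout value is illustrative):

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class AsyncTruncateSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Submits the truncate procedure on the master and returns a future; the
      // client then polls the master until the procedure reports completion.
      Future<Void> f = admin.truncateTableAsync(
          TableName.valueOf("Group_testTableMoveTruncateAndDrop"), true);
      f.get(5, TimeUnit.MINUTES);
    }
  }
}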
2023-07-13 03:16:22,323 INFO [StoreOpener-35e74b4e8c3e0b1790a78551ec92314c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,323 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=8f281f2d9b408fa8823ecad1982f2904, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,323 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182323"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218182323"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218182323"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218182323"}]},"ts":"1689218182323"} 2023-07-13 03:16:22,325 DEBUG [StoreOpener-35e74b4e8c3e0b1790a78551ec92314c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/f 2023-07-13 03:16:22,326 DEBUG [StoreOpener-35e74b4e8c3e0b1790a78551ec92314c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/f 2023-07-13 03:16:22,326 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=62 2023-07-13 03:16:22,326 INFO [StoreOpener-35e74b4e8c3e0b1790a78551ec92314c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 35e74b4e8c3e0b1790a78551ec92314c columnFamilyName f 2023-07-13 03:16:22,326 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=62, state=SUCCESS; OpenRegionProcedure 01ed4b356b529c42f25f4ba67ec393be, server=jenkins-hbase20.apache.org,37181,1689218172183 in 222 msec 2023-07-13 03:16:22,327 INFO [StoreOpener-35e74b4e8c3e0b1790a78551ec92314c-1] regionserver.HStore(310): Store=35e74b4e8c3e0b1790a78551ec92314c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:22,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-13 03:16:22,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=01ed4b356b529c42f25f4ba67ec393be, ASSIGN in 389 msec 2023-07-13 03:16:22,328 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; OpenRegionProcedure 8f281f2d9b408fa8823ecad1982f2904, server=jenkins-hbase20.apache.org,32993,1689218172776 in 214 msec 2023-07-13 03:16:22,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,329 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,330 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, ASSIGN in 392 msec 2023-07-13 03:16:22,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:22,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:22,336 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 35e74b4e8c3e0b1790a78551ec92314c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11670275680, jitterRate=0.08687911927700043}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:22,336 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 35e74b4e8c3e0b1790a78551ec92314c: 2023-07-13 03:16:22,337 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c., pid=66, masterSystemTime=1689218182256 2023-07-13 03:16:22,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:22,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 
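The entries that follow show the truncate completing, the test fetching its rsgroup (Group_testTableMoveTruncateAndDrop_1739069322, a per-run generated name), and then starting a disable of the table. A hedged sketch of the corresponding client calls, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RsGroupAndDisableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Look up the rsgroup the test created; the group name is taken from the log.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_1739069322");
      System.out.println(info.getServers() + " / " + info.getTables());

      // Disable the table; this starts the DisableTableProcedure (pid=69 below)
      // and its per-region UNASSIGN subprocedures.
      admin.disableTable(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
    }
  }
}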
2023-07-13 03:16:22,339 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=35e74b4e8c3e0b1790a78551ec92314c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,339 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182339"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218182339"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218182339"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218182339"}]},"ts":"1689218182339"} 2023-07-13 03:16:22,346 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=59 2023-07-13 03:16:22,346 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=59, state=SUCCESS; OpenRegionProcedure 35e74b4e8c3e0b1790a78551ec92314c, server=jenkins-hbase20.apache.org,37181,1689218172183 in 237 msec 2023-07-13 03:16:22,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=58 2023-07-13 03:16:22,348 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, ASSIGN in 409 msec 2023-07-13 03:16:22,348 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218182348"}]},"ts":"1689218182348"} 2023-07-13 03:16:22,350 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-13 03:16:22,351 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-13 03:16:22,353 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 673 msec 2023-07-13 03:16:22,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-13 03:16:22,803 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-13 03:16:22,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:22,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:22,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:22,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:22,808 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:22,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:22,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:22,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-13 03:16:22,823 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218182823"}]},"ts":"1689218182823"} 2023-07-13 03:16:22,825 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-13 03:16:22,827 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-13 03:16:22,829 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, UNASSIGN}] 2023-07-13 03:16:22,831 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, UNASSIGN 2023-07-13 03:16:22,833 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, UNASSIGN 2023-07-13 03:16:22,833 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, UNASSIGN 2023-07-13 03:16:22,834 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, 
UNASSIGN 2023-07-13 03:16:22,834 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, UNASSIGN 2023-07-13 03:16:22,838 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=8f281f2d9b408fa8823ecad1982f2904, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,839 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182838"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182838"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182838"}]},"ts":"1689218182838"} 2023-07-13 03:16:22,839 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=01ed4b356b529c42f25f4ba67ec393be, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,839 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182839"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182839"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182839"}]},"ts":"1689218182839"} 2023-07-13 03:16:22,840 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f5c4e050bfbe648d78474a0e89eaae95, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,840 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=7249b32725a4e7fd2b734510957ddb0a, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:22,840 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182840"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182840"}]},"ts":"1689218182840"} 2023-07-13 03:16:22,840 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218182840"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182840"}]},"ts":"1689218182840"} 2023-07-13 03:16:22,840 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=35e74b4e8c3e0b1790a78551ec92314c, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:22,841 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218182840"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218182840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218182840"}]},"ts":"1689218182840"} 2023-07-13 03:16:22,845 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=74, state=RUNNABLE; CloseRegionProcedure 8f281f2d9b408fa8823ecad1982f2904, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:22,848 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 01ed4b356b529c42f25f4ba67ec393be, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,849 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=72, state=RUNNABLE; CloseRegionProcedure f5c4e050bfbe648d78474a0e89eaae95, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,851 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=71, state=RUNNABLE; CloseRegionProcedure 7249b32725a4e7fd2b734510957ddb0a, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:22,852 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=70, state=RUNNABLE; CloseRegionProcedure 35e74b4e8c3e0b1790a78551ec92314c, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:22,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-13 03:16:22,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:22,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7249b32725a4e7fd2b734510957ddb0a, disabling compactions & flushes 2023-07-13 03:16:23,000 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:23,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:23,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. after waiting 0 ms 2023-07-13 03:16:23,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 
2023-07-13 03:16:23,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:23,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f5c4e050bfbe648d78474a0e89eaae95, disabling compactions & flushes 2023-07-13 03:16:23,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:23,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:23,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. after waiting 0 ms 2023-07-13 03:16:23,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:23,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:23,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a. 2023-07-13 03:16:23,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7249b32725a4e7fd2b734510957ddb0a: 2023-07-13 03:16:23,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:23,008 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:23,008 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:23,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 8f281f2d9b408fa8823ecad1982f2904, disabling compactions & flushes 2023-07-13 03:16:23,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:23,009 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=7249b32725a4e7fd2b734510957ddb0a, regionState=CLOSED 2023-07-13 03:16:23,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 
2023-07-13 03:16:23,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. after waiting 0 ms 2023-07-13 03:16:23,009 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218183009"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218183009"}]},"ts":"1689218183009"} 2023-07-13 03:16:23,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:23,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95. 2023-07-13 03:16:23,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f5c4e050bfbe648d78474a0e89eaae95: 2023-07-13 03:16:23,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:23,012 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 01ed4b356b529c42f25f4ba67ec393be, disabling compactions & flushes 2023-07-13 03:16:23,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 2023-07-13 03:16:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. after waiting 0 ms 2023-07-13 03:16:23,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 
2023-07-13 03:16:23,013 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f5c4e050bfbe648d78474a0e89eaae95, regionState=CLOSED 2023-07-13 03:16:23,014 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218183013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218183013"}]},"ts":"1689218183013"} 2023-07-13 03:16:23,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=71 2023-07-13 03:16:23,016 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=71, state=SUCCESS; CloseRegionProcedure 7249b32725a4e7fd2b734510957ddb0a, server=jenkins-hbase20.apache.org,32993,1689218172776 in 161 msec 2023-07-13 03:16:23,017 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7249b32725a4e7fd2b734510957ddb0a, UNASSIGN in 186 msec 2023-07-13 03:16:23,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:23,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904. 2023-07-13 03:16:23,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 8f281f2d9b408fa8823ecad1982f2904: 2023-07-13 03:16:23,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=72 2023-07-13 03:16:23,020 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=72, state=SUCCESS; CloseRegionProcedure f5c4e050bfbe648d78474a0e89eaae95, server=jenkins-hbase20.apache.org,37181,1689218172183 in 168 msec 2023-07-13 03:16:23,020 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:23,021 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:23,021 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=8f281f2d9b408fa8823ecad1982f2904, regionState=CLOSED 2023-07-13 03:16:23,021 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218183021"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218183021"}]},"ts":"1689218183021"} 2023-07-13 03:16:23,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be. 
2023-07-13 03:16:23,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 01ed4b356b529c42f25f4ba67ec393be: 2023-07-13 03:16:23,022 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f5c4e050bfbe648d78474a0e89eaae95, UNASSIGN in 191 msec 2023-07-13 03:16:23,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:23,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:23,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 35e74b4e8c3e0b1790a78551ec92314c, disabling compactions & flushes 2023-07-13 03:16:23,025 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:23,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 2023-07-13 03:16:23,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. after waiting 0 ms 2023-07-13 03:16:23,025 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 
2023-07-13 03:16:23,025 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=01ed4b356b529c42f25f4ba67ec393be, regionState=CLOSED 2023-07-13 03:16:23,025 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689218183025"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218183025"}]},"ts":"1689218183025"} 2023-07-13 03:16:23,026 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=74 2023-07-13 03:16:23,026 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=74, state=SUCCESS; CloseRegionProcedure 8f281f2d9b408fa8823ecad1982f2904, server=jenkins-hbase20.apache.org,32993,1689218172776 in 178 msec 2023-07-13 03:16:23,028 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f281f2d9b408fa8823ecad1982f2904, UNASSIGN in 197 msec 2023-07-13 03:16:23,029 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-13 03:16:23,029 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 01ed4b356b529c42f25f4ba67ec393be, server=jenkins-hbase20.apache.org,37181,1689218172183 in 180 msec 2023-07-13 03:16:23,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:23,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c. 
2023-07-13 03:16:23,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 35e74b4e8c3e0b1790a78551ec92314c: 2023-07-13 03:16:23,031 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=01ed4b356b529c42f25f4ba67ec393be, UNASSIGN in 200 msec 2023-07-13 03:16:23,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:23,032 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=35e74b4e8c3e0b1790a78551ec92314c, regionState=CLOSED 2023-07-13 03:16:23,033 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689218183032"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218183032"}]},"ts":"1689218183032"} 2023-07-13 03:16:23,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=70 2023-07-13 03:16:23,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=70, state=SUCCESS; CloseRegionProcedure 35e74b4e8c3e0b1790a78551ec92314c, server=jenkins-hbase20.apache.org,37181,1689218172183 in 182 msec 2023-07-13 03:16:23,037 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=69 2023-07-13 03:16:23,037 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35e74b4e8c3e0b1790a78551ec92314c, UNASSIGN in 206 msec 2023-07-13 03:16:23,037 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218183037"}]},"ts":"1689218183037"} 2023-07-13 03:16:23,039 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-13 03:16:23,040 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-13 03:16:23,042 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 233 msec 2023-07-13 03:16:23,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-13 03:16:23,127 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-13 03:16:23,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,146 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, 
locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1739069322' 2023-07-13 03:16:23,147 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:23,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:23,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-13 03:16:23,161 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:23,161 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:23,161 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:23,161 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:23,161 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:23,164 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/recovered.edits] 2023-07-13 03:16:23,164 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/f, FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/recovered.edits] 2023-07-13 03:16:23,164 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/recovered.edits] 2023-07-13 03:16:23,164 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/recovered.edits] 2023-07-13 03:16:23,164 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/recovered.edits] 2023-07-13 03:16:23,172 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be/recovered.edits/4.seqid 2023-07-13 03:16:23,172 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95/recovered.edits/4.seqid 2023-07-13 03:16:23,172 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c/recovered.edits/4.seqid 2023-07-13 03:16:23,172 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/recovered.edits/4.seqid to 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904/recovered.edits/4.seqid 2023-07-13 03:16:23,172 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a/recovered.edits/4.seqid 2023-07-13 03:16:23,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/01ed4b356b529c42f25f4ba67ec393be 2023-07-13 03:16:23,173 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f5c4e050bfbe648d78474a0e89eaae95 2023-07-13 03:16:23,173 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35e74b4e8c3e0b1790a78551ec92314c 2023-07-13 03:16:23,173 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f281f2d9b408fa8823ecad1982f2904 2023-07-13 03:16:23,173 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7249b32725a4e7fd2b734510957ddb0a 2023-07-13 03:16:23,174 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-13 03:16:23,177 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,183 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-13 03:16:23,186 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-13 03:16:23,187 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,188 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-13 03:16:23,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218183188"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218183188"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218183188"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218183188"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,188 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218183188"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,190 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 03:16:23,191 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 35e74b4e8c3e0b1790a78551ec92314c, NAME => 'Group_testTableMoveTruncateAndDrop,,1689218181724.35e74b4e8c3e0b1790a78551ec92314c.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 7249b32725a4e7fd2b734510957ddb0a, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689218181724.7249b32725a4e7fd2b734510957ddb0a.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => f5c4e050bfbe648d78474a0e89eaae95, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689218181724.f5c4e050bfbe648d78474a0e89eaae95.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 01ed4b356b529c42f25f4ba67ec393be, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689218181724.01ed4b356b529c42f25f4ba67ec393be.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8f281f2d9b408fa8823ecad1982f2904, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689218181724.8f281f2d9b408fa8823ecad1982f2904.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 03:16:23,191 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-13 03:16:23,191 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218183191"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:23,193 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-13 03:16:23,194 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-13 03:16:23,198 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 57 msec 2023-07-13 03:16:23,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-13 03:16:23,261 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-13 03:16:23,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:23,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:23,270 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37181] ipc.CallRunner(144): callId: 165 service: ClientService methodName: Scan size: 147 connection: 148.251.75.209:51378 deadline: 1689218243270, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=6. 2023-07-13 03:16:23,377 DEBUG [hconnection-0x4b805d89-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:23,380 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35522, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:23,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,393 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:23,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup default 2023-07-13 03:16:23,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:23,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:23,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1739069322, current retry=0 2023-07-13 03:16:23,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:23,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1739069322 => default 2023-07-13 03:16:23,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testTableMoveTruncateAndDrop_1739069322 2023-07-13 03:16:23,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:23,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:23,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:23,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:23,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:23,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,439 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:23,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:23,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:23,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:23,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:23,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-13 03:16:23,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219383466, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:23,467 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:23,469 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:23,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,471 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:23,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:23,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:23,509 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=502 (was 416) Potentially hanging thread: qtp562035959-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:44325-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-632 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692-prefix:jenkins-hbase20.apache.org,44171,1689218172445.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56998@0x485d8eb9-SendThread(127.0.0.1:56998) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:56998@0x485d8eb9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1725869577_17 at /127.0.0.1:53860 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost.localdomain:34135 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:44325Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:50438 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:53800 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56998@0x485d8eb9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-631 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-219071151_17 at /127.0.0.1:50490 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1725869577_17 at /127.0.0.1:50468 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:34135 from 
jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp562035959-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1725869577_17 at /127.0.0.1:46020 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692-prefix:jenkins-hbase20.apache.org,44325,1689218176275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase20:44325 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:45984 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2e50cce-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-629 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp562035959-630-acceptor-0@264b69c2-ServerConnector@171b7e62{HTTP/1.1, (http/1.1)}{0.0.0.0:46337} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741844_1020, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1148779107_17 at /127.0.0.1:36704 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 666) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 540), ProcessCount=170 (was 173), AvailableMemoryMB=3678 (was 3926) 2023-07-13 03:16:23,510 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-13 03:16:23,528 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=502, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=170, AvailableMemoryMB=3672 2023-07-13 03:16:23,529 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-13 03:16:23,530 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-13 03:16:23,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
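
The entries around this point show the per-test reset that TestRSGroupsBase performs: list rsgroup, move tables [] / servers [] back to the default group, remove the extra groups, then re-create the "master" group for the next method. Below is a minimal sketch of that cleanup loop, assuming the RSGroupAdminClient/RSGroupInfo classes named in the stack traces above; the connection setup and the exact RSGroupAdminClient constructor are assumptions, not code lifted from the test.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumed constructor: the client wrapper seen in the traces above
          // (RSGroupAdminClient.moveServers at RSGroupAdminClient.java:108).
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          // Empty and drop every group except "default"; empty sets are simply
          // ignored by the server ("moveTables() passed an empty set. Ignoring.").
          for (RSGroupInfo group : admin.listRSGroups()) {
            if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
              continue;
            }
            admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
            admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
            admin.removeRSGroup(group.getName());
          }
        }
      }
    }

Each of these client calls corresponds to one MasterRpcServices "RSGroupAdminService.*" request in the log below.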
2023-07-13 03:16:23,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:23,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:23,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:23,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,554 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:23,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:23,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:23,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:23,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:23,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:23,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219383577, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:23,578 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:23,580 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:23,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,584 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:23,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:23,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:23,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo* 2023-07-13 03:16:23,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:23,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:45566 deadline: 1689219383586, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 03:16:23,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo@ 2023-07-13 03:16:23,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:23,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 148.251.75.209:45566 deadline: 1689219383588, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 03:16:23,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup - 2023-07-13 03:16:23,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:23,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 148.251.75.209:45566 deadline: 1689219383589, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-13 03:16:23,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup foo_123 2023-07-13 03:16:23,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-13 03:16:23,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:23,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:23,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
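
testValidGroupNames drives the validation seen in the preceding entries: "foo*", "foo@" and "-" are rejected by RSGroupInfoManagerImpl.checkGroupName with a ConstraintException, while "foo_123" is accepted, so underscores are evidently allowed even though the message only mentions alphanumeric characters. The following standalone sketch reproduces that accept/reject pattern; the regex is inferred from the log, not copied from the HBase source, and a plain IllegalArgumentException stands in for ConstraintException to keep it self-contained.

    import java.util.regex.Pattern;

    public class GroupNameCheckSketch {
      // Assumed character class, inferred from the observed behaviour above.
      private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

      static void checkGroupName(String name) {
        if (name == null || !VALID.matcher(name).matches()) {
          throw new IllegalArgumentException(
              "RSGroup name should only contain alphanumeric characters: " + name);
        }
      }

      public static void main(String[] args) {
        for (String name : new String[] { "foo*", "foo@", "-", "foo_123" }) {
          try {
            checkGroupName(name);
            System.out.println(name + " -> accepted");
          } catch (IllegalArgumentException e) {
            System.out.println(name + " -> rejected");
          }
        }
      }
    }
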
2023-07-13 03:16:23,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:23,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup foo_123 2023-07-13 03:16:23,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:23,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
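
The recurring ConstraintException above and below ("Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.") comes from the teardown attempting to move the master's own address into the "master" group; port 33491 is the master RPC port in this run, not a live region server, so RSGroupAdminServer.moveServers rejects it and TestRSGroupsBase only logs "Got this on setup, FYI". A hedged sketch of that call path, using the classes named in the traces; the connection setup and the use of Address.fromString here are assumptions.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          admin.addRSGroup("master");
          // The master's host:port from this run; not a region server, so the
          // move is expected to fail with a ConstraintException.
          Address masterAddr = Address.fromString("jenkins-hbase20.apache.org:33491");
          try {
            admin.moveServers(Collections.singleton(masterAddr), "master");
          } catch (ConstraintException expected) {
            // Matches the client-side unwrapped exception logged as
            // "Got this on setup, FYI" in this log.
          }
        }
      }
    }
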
2023-07-13 03:16:23,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:23,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:23,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:23,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,659 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:23,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:23,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:23,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:23,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,670 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:23,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:23,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219383673, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:23,674 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-13 03:16:23,676 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-13 03:16:23,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-13 03:16:23,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-13 03:16:23,678 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-13 03:16:23,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default
2023-07-13 03:16:23,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-13 03:16:23,698 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 502)
Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-5
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-6
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data1/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data2/current
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-7
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:750)
 - Thread LEAK?
-, OpenFileDescriptor=811 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=170 (was 170), AvailableMemoryMB=3652 (was 3672) 2023-07-13 03:16:23,698 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-13 03:16:23,718 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=170, AvailableMemoryMB=3651 2023-07-13 03:16:23,718 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-13 03:16:23,718 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-13 03:16:23,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:23,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:23,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:23,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:23,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:23,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:23,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:23,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:23,740 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:23,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:23,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/default
2023-07-13 03:16:23,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-13 03:16:23,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-13 03:16:23,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup
2023-07-13 03:16:23,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup
2023-07-13 03:16:23,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-13 03:16:23,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master
2023-07-13 03:16:23,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-13 03:16:23,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219383755, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
2023-07-13 03:16:23,756 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-13 03:16:23,758 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:23,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,759 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:23,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:23,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:23,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:23,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:23,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup bar 2023-07-13 03:16:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:23,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:23,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:23,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:23,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:23,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171] to rsgroup bar 2023-07-13 03:16:23,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:23,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:23,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:23,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:23,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(238): Moving server region 7c4e74675a07c3fb9472d5b7eb467f88, which do not belong to RSGroup bar 2023-07-13 03:16:23,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE 2023-07-13 03:16:23,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-13 03:16:23,787 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE 2023-07-13 03:16:23,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 03:16:23,788 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:23,790 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218183788"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218183788"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218183788"}]},"ts":"1689218183788"} 2023-07-13 03:16:23,790 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-13 03:16:23,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current 
retry=0 2023-07-13 03:16:23,792 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44171,1689218172445, state=CLOSING 2023-07-13 03:16:23,793 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:23,796 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:23,799 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=84, ppid=82, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:23,799 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:23,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:23,947 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-13 03:16:23,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7c4e74675a07c3fb9472d5b7eb467f88, disabling compactions & flushes 2023-07-13 03:16:23,955 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:23,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:23,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. after waiting 0 ms 2023-07-13 03:16:23,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
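Editor's note: the entries above show the client moving three region servers into the new group "bar", and the master responding by unassigning the hbase:rsgroup and hbase:meta regions from those servers through REOPEN/MOVE procedures before the close/flush sequence that follows. A minimal client-side sketch of that flow, assuming only the hbase-rsgroup admin classes already named in the stack traces above (RSGroupAdminClient, Address); the host/port values are copied from the log purely for illustration, and this is not the test's own code.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersToBarSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // "add rsgroup bar" in the log above.
      rsGroupAdmin.addRSGroup("bar");
      Set<Address> servers = new HashSet<>();
      // Illustrative host:port values taken from the log.
      servers.add(Address.fromParts("jenkins-hbase20.apache.org", 32993));
      servers.add(Address.fromParts("jenkins-hbase20.apache.org", 37181));
      servers.add(Address.fromParts("jenkins-hbase20.apache.org", 44171));
      // Moving servers that still host regions of other groups makes the master
      // unassign those regions first (the REOPEN/MOVE procedures above). Passing an
      // address that is not a live region server, such as the master's own
      // jenkins-hbase20.apache.org:33491, is rejected with the ConstraintException
      // logged earlier.
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}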
2023-07-13 03:16:23,956 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:23,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 7c4e74675a07c3fb9472d5b7eb467f88 1/1 column families, dataSize=5.06 KB heapSize=8.50 KB 2023-07-13 03:16:23,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:23,957 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:23,957 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:23,957 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:23,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.32 KB heapSize=62.05 KB 2023-07-13 03:16:23,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.43 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:23,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.06 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:23,987 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:23,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:23,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/b74b096c110a440a98fe44c3dad0d167 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:23,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:23,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b74b096c110a440a98fe44c3dad0d167, entries=9, sequenceid=32, filesize=5.5 K 2023-07-13 03:16:24,003 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.06 
KB/5177, heapSize ~8.48 KB/8688, currentSize=0 B/0 for 7c4e74675a07c3fb9472d5b7eb467f88 in 47ms, sequenceid=32, compaction requested=false 2023-07-13 03:16:24,015 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/rep_barrier/d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-13 03:16:24,017 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:24,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:24,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:24,018 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 7c4e74675a07c3fb9472d5b7eb467f88 move to jenkins-hbase20.apache.org,44325,1689218176275 record at close sequenceid=32 2023-07-13 03:16:24,021 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=83, ppid=81, state=RUNNABLE; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:24,021 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=105 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,048 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,049 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/842db9af30d84b6ab62f648442ca88c4 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:24,057 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:24,057 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/842db9af30d84b6ab62f648442ca88c4, entries=41, sequenceid=105, filesize=9.6 K 2023-07-13 03:16:24,058 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/rep_barrier/d1d8306bee22480b8c9d58b8f4c0f30c as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier/d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,065 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,065 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier/d1d8306bee22480b8c9d58b8f4c0f30c, entries=10, sequenceid=105, filesize=6.1 K 2023-07-13 03:16:24,066 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/95e303999030447c95eebddf0ed4874a as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,074 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,074 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/95e303999030447c95eebddf0ed4874a, entries=11, sequenceid=105, filesize=6.0 K 2023-07-13 03:16:24,075 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.32 KB/41283, heapSize ~62 KB/63488, currentSize=0 B/0 for 1588230740 in 118ms, sequenceid=105, compaction requested=false 2023-07-13 03:16:24,088 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/recovered.edits/108.seqid, newMaxSeqId=108, maxSeqId=19 2023-07-13 03:16:24,089 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:24,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:24,090 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:24,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase20.apache.org,44325,1689218176275 record at close sequenceid=105 2023-07-13 03:16:24,092 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 
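Editor's note: before each region above is closed and handed to its new server, its memstore is flushed to a fresh HFile, the file is committed from the .tmp directory into the store, and a recovered.edits seqid marker is written. As a hedged sketch only, the same kind of memstore flush can be requested from a client with the standard Admin API; the table name is taken from the log for illustration and this call is not part of the test itself.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushBeforeMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Writes out the memstore of every region of the table as HFiles, which is the
      // same flush the close path performs above before recording the region move.
      admin.flush(TableName.valueOf("hbase:rsgroup"));
    }
  }
}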
2023-07-13 03:16:24,092 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-13 03:16:24,094 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=84, resume processing ppid=82 2023-07-13 03:16:24,095 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, ppid=82, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44171,1689218172445 in 296 msec 2023-07-13 03:16:24,095 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=82, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:24,246 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44325,1689218176275, state=OPENING 2023-07-13 03:16:24,247 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:24,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=82, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:24,247 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:24,404 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 03:16:24,404 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:24,407 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C44325%2C1689218176275.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44325,1689218176275, archiveDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs, maxLogs=32 2023-07-13 03:16:24,426 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK] 2023-07-13 03:16:24,426 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK] 2023-07-13 03:16:24,426 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK] 2023-07-13 03:16:24,428 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/WALs/jenkins-hbase20.apache.org,44325,1689218176275/jenkins-hbase20.apache.org%2C44325%2C1689218176275.meta.1689218184408.meta 2023-07-13 03:16:24,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK], DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK]] 2023-07-13 03:16:24,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:24,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:24,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 03:16:24,429 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-13 03:16:24,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 03:16:24,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:24,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 03:16:24,429 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 03:16:24,431 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:24,432 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:24,432 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info 2023-07-13 03:16:24,432 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:24,443 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:24,443 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/842db9af30d84b6ab62f648442ca88c4 2023-07-13 03:16:24,449 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/bcaca02537e54611aed1a2e9a228755c 2023-07-13 03:16:24,450 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:24,450 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:24,451 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:24,451 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:24,451 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:24,458 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,458 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier/d1d8306bee22480b8c9d58b8f4c0f30c 2023-07-13 03:16:24,458 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:24,459 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:24,460 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:24,460 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table 2023-07-13 03:16:24,460 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:24,468 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/804791cb25304cdcb162f6411c6bacb2 2023-07-13 03:16:24,473 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,474 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/95e303999030447c95eebddf0ed4874a 2023-07-13 03:16:24,474 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:24,475 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:24,476 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740 2023-07-13 03:16:24,479 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
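Editor's note: the last entry notes that FlushLargeStoresPolicy falls back to memstore-flush-size divided by the number of column families because hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the table descriptor. A sketch, under the assumption that one wanted to set that bound explicitly on an ordinary user table via TableDescriptorBuilder; the table name and threshold are hypothetical, and system tables such as hbase:meta are normally left alone.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBoundSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("my_table"); // hypothetical table
      TableDescriptor current = admin.getDescriptor(tn);
      // Store the lower bound in the table descriptor so FlushLargeStoresPolicy no
      // longer falls back to memstore-flush-size / number-of-families as logged above.
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound", "16777216") // 16 MB, illustrative
          .build();
      admin.modifyTable(updated);
    }
  }
}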
2023-07-13 03:16:24,480 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:24,481 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=109; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11988076480, jitterRate=0.1164766252040863}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:24,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:24,482 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=85, masterSystemTime=1689218184399 2023-07-13 03:16:24,483 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 03:16:24,483 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 03:16:24,484 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,44325,1689218176275, state=OPEN 2023-07-13 03:16:24,485 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:24,485 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:24,485 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=CLOSED 2023-07-13 03:16:24,486 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218184485"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218184485"}]},"ts":"1689218184485"} 2023-07-13 03:16:24,486 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44171] ipc.CallRunner(144): callId: 194 service: ClientService methodName: Mutate size: 214 connection: 148.251.75.209:60510 deadline: 1689218244486, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=105. 
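Editor's note: the RegionMovedException above is how the old server tells a caller that hbase:meta now lives on jenkins-hbase20.apache.org:44325. A hedged sketch of refreshing a cached location after such a move, using RegionLocator with reload=true; the empty row key simply targets the first (and only) meta region.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RelocateAfterMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // reload=true bypasses the client-side location cache, so after a
      // RegionMovedException the lookup returns the new hosting server
      // rather than the stale one.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println("hbase:meta is now on " + loc.getServerName());
    }
  }
}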
2023-07-13 03:16:24,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=82 2023-07-13 03:16:24,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=82, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,44325,1689218176275 in 238 msec 2023-07-13 03:16:24,488 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 700 msec 2023-07-13 03:16:24,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-13 03:16:24,592 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; CloseRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44171,1689218172445 in 796 msec 2023-07-13 03:16:24,592 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:24,743 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:24,743 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218184743"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218184743"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218184743"}]},"ts":"1689218184743"} 2023-07-13 03:16:24,745 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=81, state=RUNNABLE; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:24,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-13 03:16:24,904 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:24,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7c4e74675a07c3fb9472d5b7eb467f88, NAME => 'hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:24,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:24,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
service=MultiRowMutationService 2023-07-13 03:16:24,905 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-13 03:16:24,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,905 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:24,906 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,906 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,911 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,912 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:24,913 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m 2023-07-13 03:16:24,913 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7c4e74675a07c3fb9472d5b7eb467f88 columnFamilyName m 2023-07-13 03:16:24,928 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(539): loaded hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b15a5711a2e54542bce9c6f3fae93ae6 2023-07-13 03:16:24,937 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:24,937 DEBUG [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(539): loaded 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/b74b096c110a440a98fe44c3dad0d167 2023-07-13 03:16:24,937 INFO [StoreOpener-7c4e74675a07c3fb9472d5b7eb467f88-1] regionserver.HStore(310): Store=7c4e74675a07c3fb9472d5b7eb467f88/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:24,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,944 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:24,945 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 7c4e74675a07c3fb9472d5b7eb467f88; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3dc6de7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:24,945 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:24,945 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88., pid=86, masterSystemTime=1689218184898 2023-07-13 03:16:24,947 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:24,947 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
2023-07-13 03:16:24,948 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=7c4e74675a07c3fb9472d5b7eb467f88, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:24,948 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218184947"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218184947"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218184947"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218184947"}]},"ts":"1689218184947"} 2023-07-13 03:16:24,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=81 2023-07-13 03:16:24,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=81, state=SUCCESS; OpenRegionProcedure 7c4e74675a07c3fb9472d5b7eb467f88, server=jenkins-hbase20.apache.org,44325,1689218176275 in 205 msec 2023-07-13 03:16:24,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=7c4e74675a07c3fb9472d5b7eb467f88, REOPEN/MOVE in 1.1670 sec 2023-07-13 03:16:25,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183, jenkins-hbase20.apache.org,44171,1689218172445] are moved back to default 2023-07-13 03:16:25,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-13 03:16:25,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:25,793 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44171] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 148.251.75.209:45378 deadline: 1689218245792, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=32. 2023-07-13 03:16:25,895 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44171] ipc.CallRunner(144): callId: 15 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:45378 deadline: 1689218245894, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=105. 
2023-07-13 03:16:25,996 DEBUG [hconnection-0x2cd1b0c2-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:25,998 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35528, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:26,000 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 03:16:26,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:26,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:26,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-13 03:16:26,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:26,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:26,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:26,030 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:26,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 87 2023-07-13 03:16:26,031 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44171] ipc.CallRunner(144): callId: 199 service: ClientService methodName: ExecService size: 532 connection: 148.251.75.209:60510 deadline: 1689218246031, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=32. 
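
For reference, the create request logged just above prints the full shell-style descriptor for table 'Group_testFailRemoveGroup' (a single family 'f', REGION_REPLICATION => '1'). That request corresponds roughly to the following client-side Admin call; this is an illustrative sketch, not code from this test run, and the class name CreateGroupTestTable plus the connection bootstrap are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTestTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // One column family 'f' with default attributes and region replication 1,
          // matching the descriptor printed by HMaster in the log above.
          admin.createTable(
              TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                  .setRegionReplication(1)
                  .build());
        }
      }
    }
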
2023-07-13 03:16:26,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-13 03:16:26,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-13 03:16:26,136 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:26,136 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:26,137 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:26,137 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:26,143 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:26,145 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,145 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e empty. 2023-07-13 03:16:26,146 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,146 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 03:16:26,168 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:26,169 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f5d94aa765f42c2129da7671f3e5126e, NAME => 'Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:26,185 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:26,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing f5d94aa765f42c2129da7671f3e5126e, disabling compactions & flushes 2023-07-13 03:16:26,186 INFO 
[RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. after waiting 0 ms 2023-07-13 03:16:26,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,186 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,186 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:26,189 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:26,190 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218186190"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218186190"}]},"ts":"1689218186190"} 2023-07-13 03:16:26,191 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 03:16:26,192 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:26,193 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218186192"}]},"ts":"1689218186192"} 2023-07-13 03:16:26,194 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-13 03:16:26,197 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, ASSIGN}] 2023-07-13 03:16:26,199 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, ASSIGN 2023-07-13 03:16:26,200 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:26,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-13 03:16:26,351 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:26,352 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218186351"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218186351"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218186351"}]},"ts":"1689218186351"} 2023-07-13 03:16:26,353 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:26,509 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
2023-07-13 03:16:26,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5d94aa765f42c2129da7671f3e5126e, NAME => 'Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:26,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:26,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,512 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,514 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:26,515 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:26,516 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5d94aa765f42c2129da7671f3e5126e columnFamilyName f 2023-07-13 03:16:26,517 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(310): Store=f5d94aa765f42c2129da7671f3e5126e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:26,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,518 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,521 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:26,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f5d94aa765f42c2129da7671f3e5126e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10322855040, jitterRate=-0.038609206676483154}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:26,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:26,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e., pid=89, masterSystemTime=1689218186505 2023-07-13 03:16:26,527 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,527 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
2023-07-13 03:16:26,528 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:26,528 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218186528"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218186528"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218186528"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218186528"}]},"ts":"1689218186528"} 2023-07-13 03:16:26,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-13 03:16:26,533 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 177 msec 2023-07-13 03:16:26,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-13 03:16:26,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, ASSIGN in 336 msec 2023-07-13 03:16:26,535 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:26,535 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218186535"}]},"ts":"1689218186535"} 2023-07-13 03:16:26,539 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-13 03:16:26,541 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=87, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:26,543 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 514 msec 2023-07-13 03:16:26,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-13 03:16:26,637 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-13 03:16:26,637 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-13 03:16:26,637 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:26,638 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44171] ipc.CallRunner(144): callId: 276 service: ClientService methodName: Scan size: 96 connection: 148.251.75.209:45392 deadline: 1689218246638, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase20.apache.org port=44325 startCode=1689218176275. As of locationSeqNum=105. 2023-07-13 03:16:26,740 DEBUG [hconnection-0x2b42746c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:26,741 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35542, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:26,752 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-13 03:16:26,752 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:26,752 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-13 03:16:26,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-13 03:16:26,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:26,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:26,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:26,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:26,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-13 03:16:26,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region f5d94aa765f42c2129da7671f3e5126e to RSGroup bar 2023-07-13 03:16:26,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:26,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:26,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:26,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:26,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 03:16:26,760 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:26,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE 2023-07-13 03:16:26,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-13 03:16:26,763 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE 2023-07-13 03:16:26,764 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:26,764 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218186763"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218186763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218186763"}]},"ts":"1689218186763"} 2023-07-13 03:16:26,765 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:26,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f5d94aa765f42c2129da7671f3e5126e, disabling compactions & flushes 2023-07-13 03:16:26,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. after waiting 0 ms 2023-07-13 03:16:26,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:26,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:26,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
2023-07-13 03:16:26,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:26,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f5d94aa765f42c2129da7671f3e5126e move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:26,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:26,931 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSED 2023-07-13 03:16:26,931 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218186931"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218186931"}]},"ts":"1689218186931"} 2023-07-13 03:16:26,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-13 03:16:26,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 168 msec 2023-07-13 03:16:26,936 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:27,086 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:27,086 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:27,087 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218187086"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218187086"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218187086"}]},"ts":"1689218187086"} 2023-07-13 03:16:27,088 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:27,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
2023-07-13 03:16:27,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5d94aa765f42c2129da7671f3e5126e, NAME => 'Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:27,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:27,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,247 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,248 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:27,248 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:27,248 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5d94aa765f42c2129da7671f3e5126e columnFamilyName f 2023-07-13 03:16:27,249 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(310): Store=f5d94aa765f42c2129da7671f3e5126e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:27,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,251 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f5d94aa765f42c2129da7671f3e5126e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11611056640, jitterRate=0.08136391639709473}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:27,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:27,256 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e., pid=92, masterSystemTime=1689218187240 2023-07-13 03:16:27,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:27,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:27,257 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:27,258 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218187257"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218187257"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218187257"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218187257"}]},"ts":"1689218187257"} 2023-07-13 03:16:27,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-13 03:16:27,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,37181,1689218172183 in 171 msec 2023-07-13 03:16:27,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE in 501 msec 2023-07-13 03:16:27,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-13 03:16:27,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
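
For reference, the MoveTables request that produced the REOPEN/MOVE procedures above (region f5d94aa765f42c2129da7671f3e5126e reopened under group 'bar') can be issued from a client roughly as follows. This is a sketch assuming the RSGroupAdmin/RSGroupAdminClient API shipped in this hbase-rsgroup module; the helper class MoveTableToBar and the already-open Connection are assumptions, not the test's actual code.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveTableToBar {
      // Moves the test table into RSGroup 'bar'; the master responds with the
      // REOPEN/MOVE TransitRegionStateProcedure visible in the log above.
      static void moveToBar(Connection conn) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
      }
    }
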
2023-07-13 03:16:27,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:27,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:27,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:27,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bar 2023-07-13 03:16:27,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:27,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-13 03:16:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:27,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:45566 deadline: 1689219387772, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-13 03:16:27,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171] to rsgroup default 2023-07-13 03:16:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 191 connection: 148.251.75.209:45566 deadline: 1689219387773, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-13 03:16:27,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-13 03:16:27,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:27,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:27,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:27,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:27,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-13 03:16:27,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region f5d94aa765f42c2129da7671f3e5126e to RSGroup default 2023-07-13 03:16:27,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE 2023-07-13 03:16:27,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 03:16:27,783 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE 2023-07-13 03:16:27,784 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:27,784 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218187784"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218187784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218187784"}]},"ts":"1689218187784"} 2023-07-13 03:16:27,785 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:27,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f5d94aa765f42c2129da7671f3e5126e, disabling compactions & flushes 2023-07-13 03:16:27,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. after waiting 0 ms 2023-07-13 03:16:27,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:27,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:27,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
2023-07-13 03:16:27,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:27,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding f5d94aa765f42c2129da7671f3e5126e move to jenkins-hbase20.apache.org,44325,1689218176275 record at close sequenceid=5 2023-07-13 03:16:27,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:27,947 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSED 2023-07-13 03:16:27,947 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218187947"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218187947"}]},"ts":"1689218187947"} 2023-07-13 03:16:27,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-13 03:16:27,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,37181,1689218172183 in 163 msec 2023-07-13 03:16:27,955 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=93, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:28,106 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:28,106 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218188106"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218188106"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218188106"}]},"ts":"1689218188106"} 2023-07-13 03:16:28,108 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=93, state=RUNNABLE; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:28,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 
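
The two ConstraintException failures recorded earlier (removing rsgroup 'bar' while it still owns a table, and moving all of bar's servers back to default while the table is still in the group) are the expected part of testFailRemoveGroup; the log then shows the table being moved back to default, which is the cleanup order that does succeed. A sketch of that order of operations, under the same RSGroupAdminClient assumption as above: RemoveGroupSafely is a hypothetical helper, the server addresses are taken from the MoveServers request logged above, and only IOException is caught because the exact client-side exception wrapping is not confirmed by this log.

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RemoveGroupSafely {
      static void removeBar(Connection conn) throws IOException {
        RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.removeRSGroup("bar"); // rejected while 'bar' still owns a table
        } catch (IOException expected) {
          // server-side cause is the ConstraintException seen in the log above
        }
        // Order that succeeds: move the table out first, then the servers, then remove the group.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
        rsGroupAdmin.moveServers(
            new HashSet<>(Arrays.asList(
                Address.fromParts("jenkins-hbase20.apache.org", 32993),
                Address.fromParts("jenkins-hbase20.apache.org", 37181),
                Address.fromParts("jenkins-hbase20.apache.org", 44171))), "default");
        rsGroupAdmin.removeRSGroup("bar");
      }
    }
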
2023-07-13 03:16:28,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f5d94aa765f42c2129da7671f3e5126e, NAME => 'Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:28,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:28,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,266 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,268 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:28,268 DEBUG [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f 2023-07-13 03:16:28,268 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f5d94aa765f42c2129da7671f3e5126e columnFamilyName f 2023-07-13 03:16:28,269 INFO [StoreOpener-f5d94aa765f42c2129da7671f3e5126e-1] regionserver.HStore(310): Store=f5d94aa765f42c2129da7671f3e5126e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:28,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,271 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:28,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened f5d94aa765f42c2129da7671f3e5126e; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9832123520, jitterRate=-0.08431214094161987}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:28,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:28,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e., pid=95, masterSystemTime=1689218188259 2023-07-13 03:16:28,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:28,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:28,277 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=93 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:28,277 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218188277"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218188277"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218188277"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218188277"}]},"ts":"1689218188277"} 2023-07-13 03:16:28,282 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=93 2023-07-13 03:16:28,283 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=93, state=SUCCESS; OpenRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 172 msec 2023-07-13 03:16:28,284 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, REOPEN/MOVE in 501 msec 2023-07-13 03:16:28,537 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-13 03:16:28,537 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 03:16:28,538 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 03:16:28,783 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=93 2023-07-13 03:16:28,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 2023-07-13 03:16:28,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:28,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:28,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:28,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-13 03:16:28,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:28,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 148.251.75.209:45566 deadline: 1689219388790, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
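The ConstraintException above is the guard this test exercises: RSGroupAdminServer.removeRSGroup refuses to drop a group that still owns servers. The usual client-side pattern, and what the test does next, is to drain the group back into the default group and then retry the removal. A rough sketch using RSGroupAdminClient, the same client class that appears in the stack traces further down; the class is internal to the hbase-rsgroup module, so exact signatures may vary by branch, and the group name passed in is illustrative.

    import java.io.IOException;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RemoveGroupSketch {
      static void removeGroup(Connection conn, String group) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
          rsGroupAdmin.removeRSGroup(group);            // rejected while the group still has servers
        } catch (ConstraintException e) {
          // Drain the group into the built-in default group first, then retry,
          // in the same order as the retry traced just below.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
          Set<Address> servers = info.getServers();
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
          rsGroupAdmin.removeRSGroup(group);
        }
      }
    }
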
2023-07-13 03:16:28,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171] to rsgroup default 2023-07-13 03:16:28,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:28,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-13 03:16:28,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:28,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:28,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-13 03:16:28,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183, jenkins-hbase20.apache.org,44171,1689218172445] are moved back to bar 2023-07-13 03:16:28,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-13 03:16:28,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:28,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:28,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:28,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bar 2023-07-13 03:16:28,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:28,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:28,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:28,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:28,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:28,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:28,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:28,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:28,850 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-13 03:16:28,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testFailRemoveGroup 2023-07-13 03:16:28,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:28,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-13 03:16:28,860 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218188860"}]},"ts":"1689218188860"} 2023-07-13 03:16:28,862 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-13 03:16:28,863 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-13 03:16:28,864 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=97, ppid=96, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, UNASSIGN}] 2023-07-13 03:16:28,867 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, ppid=96, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, UNASSIGN 2023-07-13 03:16:28,875 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:28,875 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218188874"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218188874"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218188874"}]},"ts":"1689218188874"} 2023-07-13 03:16:28,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:28,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-13 03:16:29,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close f5d94aa765f42c2129da7671f3e5126e 2023-07-13 
03:16:29,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f5d94aa765f42c2129da7671f3e5126e, disabling compactions & flushes 2023-07-13 03:16:29,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:29,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:29,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. after waiting 0 ms 2023-07-13 03:16:29,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:29,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 03:16:29,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e. 2023-07-13 03:16:29,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f5d94aa765f42c2129da7671f3e5126e: 2023-07-13 03:16:29,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:29,037 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=f5d94aa765f42c2129da7671f3e5126e, regionState=CLOSED 2023-07-13 03:16:29,037 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689218189037"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218189037"}]},"ts":"1689218189037"} 2023-07-13 03:16:29,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-13 03:16:29,041 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; CloseRegionProcedure f5d94aa765f42c2129da7671f3e5126e, server=jenkins-hbase20.apache.org,44325,1689218176275 in 162 msec 2023-07-13 03:16:29,205 INFO [AsyncFSWAL-0-hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData-prefix:jenkins-hbase20.apache.org,33491,1689218169949] wal.AbstractFSWAL(1141): Slow sync cost: 164 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:37299,DS-7325d1d6-f32c-4e9f-9d47-b89ecc0dcb96,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-72306ca7-9fea-43e7-ac6c-a3e6f88d5ecf,DISK], DatanodeInfoWithStorage[127.0.0.1:43409,DS-f2641e55-6772-43f9-9084-b6bc41af5cda,DISK]] 2023-07-13 03:16:29,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-13 03:16:29,207 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=97, resume processing ppid=96 2023-07-13 03:16:29,207 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, ppid=96, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=f5d94aa765f42c2129da7671f3e5126e, UNASSIGN in 177 msec 2023-07-13 03:16:29,208 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218189208"}]},"ts":"1689218189208"} 2023-07-13 03:16:29,209 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-13 03:16:29,211 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-13 03:16:29,213 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 361 msec 2023-07-13 03:16:29,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-13 03:16:29,510 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-13 03:16:29,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testFailRemoveGroup 2023-07-13 03:16:29,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=99, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,517 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=99, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-13 03:16:29,519 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=99, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:29,525 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:29,528 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/f, FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits] 2023-07-13 03:16:29,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-13 03:16:29,535 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/10.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e/recovered.edits/10.seqid 2023-07-13 03:16:29,536 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testFailRemoveGroup/f5d94aa765f42c2129da7671f3e5126e 2023-07-13 03:16:29,536 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-13 03:16:29,541 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=99, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,557 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-13 03:16:29,561 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-13 03:16:29,562 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=99, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,563 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-13 03:16:29,563 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218189563"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:29,574 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 03:16:29,575 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f5d94aa765f42c2129da7671f3e5126e, NAME => 'Group_testFailRemoveGroup,,1689218186026.f5d94aa765f42c2129da7671f3e5126e.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 03:16:29,575 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
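The sequence from 03:16:28,850 onwards is the ordinary two-step table drop: DisableTableProcedure (pid=96) unassigns the region and marks the table DISABLED, then DeleteTableProcedure (pid=99) archives the region directory and removes the rows from hbase:meta. From a client the same flow is simply disable then delete; a minimal sketch with the public Admin API, where the helper method name is illustrative.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DropTableSketch {
      // Disable first (DisableTableProcedure), then delete (DeleteTableProcedure),
      // mirroring procedures pid=96 and pid=99 in the trace around this point.
      static void dropTable(Admin admin, TableName table) throws IOException {
        if (!admin.tableExists(table)) {
          return;
        }
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table);  // regions are unassigned, table state goes to DISABLED in hbase:meta
        }
        admin.deleteTable(table);     // region dir is archived, region and table rows leave hbase:meta
      }
    }
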
2023-07-13 03:16:29,575 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218189575"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:29,577 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-13 03:16:29,579 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=99, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-13 03:16:29,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 67 msec 2023-07-13 03:16:29,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=99 2023-07-13 03:16:29,639 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 99 completed 2023-07-13 03:16:29,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:29,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
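Once the table is gone, the teardown restores the rsgroup layout and then polls until only the groups it expects remain: the built-in default group plus the master group the test re-creates, as the "Waiting for cleanup to finish" lines below show. A hedged sketch of that polling loop written against RSGroupAdminClient.listRSGroups; the 60,000 ms timeout mirrors the wait logged below, while the helper name and sleep interval are illustrative.

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class WaitForGroupCleanupSketch {
      // Poll until only the expected groups remain: the built-in "default" group and
      // the "master" group this test re-creates before the next test method runs.
      static void waitForCleanup(RSGroupAdminClient rsGroupAdmin)
          throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + 60_000;   // test waits up to 60,000 ms
        while (System.currentTimeMillis() < deadline) {
          List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
          boolean done = groups.size() == 2 && groups.stream().allMatch(
              g -> g.getName().equals(RSGroupInfo.DEFAULT_GROUP) || g.getName().equals("master"));
          if (done) {
            return;
          }
          Thread.sleep(100);
        }
        throw new IllegalStateException("rsgroup cleanup did not finish within 60s");
      }
    }
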
2023-07-13 03:16:29,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:29,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:29,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:29,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:29,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:29,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:29,662 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:29,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:29,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:29,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:29,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:29,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 345 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219389687, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:29,688 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:29,690 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:29,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,692 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:29,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:29,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:29,722 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=522 (was 507) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:36838 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:36860 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:59198 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:49310 [Receiving block BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2b42746c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692-prefix:jenkins-hbase20.apache.org,44325,1689218176275.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1148779107_17 at /127.0.0.1:49312 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-28934839-148.251.75.209-1689218166310:blk_1073741862_1038, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:59208 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=824 (was 811) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=500 (was 509), ProcessCount=170 (was 170), AvailableMemoryMB=3529 (was 3651) 2023-07-13 03:16:29,723 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 03:16:29,744 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=522, OpenFileDescriptor=824, MaxFileDescriptor=60000, SystemLoadAverage=500, ProcessCount=170, AvailableMemoryMB=3526 2023-07-13 03:16:29,744 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 03:16:29,745 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-13 03:16:29,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:29,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:29,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:29,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:29,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:29,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:29,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:29,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:29,771 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:29,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:29,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:29,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:29,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:29,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:29,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 373 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219389790, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:29,790 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:29,795 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:29,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,797 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:29,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:29,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:29,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 
2023-07-13 03:16:29,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:29,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:29,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993] to rsgroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:29,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:29,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776] are moved back to default 2023-07-13 03:16:29,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:29,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:29,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:29,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] 
master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:29,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:29,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:29,852 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:29,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 100 2023-07-13 03:16:29,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 03:16:29,856 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:29,857 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:29,858 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:29,858 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:29,863 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:29,865 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:29,866 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd empty. 
2023-07-13 03:16:29,867 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:29,867 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 03:16:29,934 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:29,935 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 874e08371a3a7899ac31bb3db63ba4fd, NAME => 'GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:29,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 03:16:29,960 WARN [DataStreamer for file /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/.regioninfo] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 874e08371a3a7899ac31bb3db63ba4fd, disabling compactions & flushes 2023-07-13 03:16:29,961 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
after waiting 0 ms 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:29,961 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:29,961 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 874e08371a3a7899ac31bb3db63ba4fd: 2023-07-13 03:16:29,968 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:29,970 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218189969"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218189969"}]},"ts":"1689218189969"} 2023-07-13 03:16:29,974 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:29,975 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:29,976 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218189975"}]},"ts":"1689218189975"} 2023-07-13 03:16:29,978 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-13 03:16:29,981 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:29,981 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:29,981 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:29,981 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:29,981 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:29,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, ASSIGN}] 2023-07-13 03:16:29,984 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, ASSIGN 2023-07-13 03:16:29,985 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:30,135 INFO [jenkins-hbase20:33491] 
balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:30,137 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:30,137 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218190137"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218190137"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218190137"}]},"ts":"1689218190137"} 2023-07-13 03:16:30,139 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:30,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 03:16:30,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:30,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 874e08371a3a7899ac31bb3db63ba4fd, NAME => 'GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:30,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:30,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,297 INFO [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,299 DEBUG [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/f 2023-07-13 03:16:30,299 DEBUG [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/f 2023-07-13 03:16:30,299 INFO 
[StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 874e08371a3a7899ac31bb3db63ba4fd columnFamilyName f 2023-07-13 03:16:30,300 INFO [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] regionserver.HStore(310): Store=874e08371a3a7899ac31bb3db63ba4fd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:30,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:30,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:30,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 874e08371a3a7899ac31bb3db63ba4fd; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11802195360, jitterRate=0.09916509687900543}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:30,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 874e08371a3a7899ac31bb3db63ba4fd: 2023-07-13 03:16:30,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd., pid=102, masterSystemTime=1689218190292 2023-07-13 03:16:30,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:30,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
2023-07-13 03:16:30,309 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:30,310 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218190309"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218190309"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218190309"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218190309"}]},"ts":"1689218190309"} 2023-07-13 03:16:30,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-13 03:16:30,314 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,44171,1689218172445 in 173 msec 2023-07-13 03:16:30,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-13 03:16:30,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, ASSIGN in 333 msec 2023-07-13 03:16:30,317 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:30,317 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218190317"}]},"ts":"1689218190317"} 2023-07-13 03:16:30,318 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-13 03:16:30,320 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:30,322 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 472 msec 2023-07-13 03:16:30,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-13 03:16:30,463 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 100 completed 2023-07-13 03:16:30,463 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-13 03:16:30,463 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:30,471 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-13 03:16:30,471 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:30,471 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-13 03:16:30,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:30,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:30,477 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:30,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 103 2023-07-13 03:16:30,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-13 03:16:30,479 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:30,480 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:30,480 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:30,480 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:30,482 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:30,484 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,484 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 empty. 
2023-07-13 03:16:30,485 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,485 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 03:16:30,499 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:30,500 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5c8a6b08d0c711cac3e654805ec9be36, NAME => 'GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 5c8a6b08d0c711cac3e654805ec9be36, disabling compactions & flushes 2023-07-13 03:16:30,511 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. after waiting 0 ms 2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:30,511 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 
2023-07-13 03:16:30,511 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 5c8a6b08d0c711cac3e654805ec9be36: 2023-07-13 03:16:30,513 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:30,514 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218190514"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218190514"}]},"ts":"1689218190514"} 2023-07-13 03:16:30,516 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:30,516 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:30,517 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218190517"}]},"ts":"1689218190517"} 2023-07-13 03:16:30,518 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-13 03:16:30,520 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:30,520 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:30,520 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:30,520 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:30,520 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:30,520 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, ASSIGN}] 2023-07-13 03:16:30,522 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, ASSIGN 2023-07-13 03:16:30,523 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:30,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-13 03:16:30,673 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 03:16:30,674 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:30,674 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218190674"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218190674"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218190674"}]},"ts":"1689218190674"} 2023-07-13 03:16:30,676 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; OpenRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:30,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-13 03:16:30,831 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:30,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c8a6b08d0c711cac3e654805ec9be36, NAME => 'GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:30,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:30,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,833 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,835 DEBUG [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/f 2023-07-13 03:16:30,835 DEBUG [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/f 2023-07-13 03:16:30,835 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c8a6b08d0c711cac3e654805ec9be36 columnFamilyName f 2023-07-13 03:16:30,836 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] regionserver.HStore(310): Store=5c8a6b08d0c711cac3e654805ec9be36/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:30,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:30,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:30,843 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5c8a6b08d0c711cac3e654805ec9be36; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11011028320, jitterRate=0.025481924414634705}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:30,843 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5c8a6b08d0c711cac3e654805ec9be36: 2023-07-13 03:16:30,844 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36., pid=105, masterSystemTime=1689218190828 2023-07-13 03:16:30,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:30,845 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 
2023-07-13 03:16:30,845 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:30,845 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218190845"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218190845"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218190845"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218190845"}]},"ts":"1689218190845"} 2023-07-13 03:16:30,848 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-13 03:16:30,849 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; OpenRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,37181,1689218172183 in 171 msec 2023-07-13 03:16:30,850 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-13 03:16:30,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, ASSIGN in 328 msec 2023-07-13 03:16:30,851 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:30,852 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218190852"}]},"ts":"1689218190852"} 2023-07-13 03:16:30,855 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-13 03:16:30,857 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=103, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:30,858 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 383 msec 2023-07-13 03:16:31,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-13 03:16:31,083 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 103 completed 2023-07-13 03:16:31,083 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-13 03:16:31,083 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:31,088 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
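The entries above walk CreateTableProcedure pid=103 through CREATE_TABLE_ADD_TO_META and CREATE_TABLE_ASSIGN_REGIONS, open region 5c8a6b08d0c711cac3e654805ec9be36 on jenkins-hbase20.apache.org,37181, and then show the client waiting for assignment. For reference, the client side of that flow looks roughly like the sketch below; it is not the test's actual code, and it assumes a running HBaseTestingUtility instance named TEST_UTIL plus the usual hbase-client imports (TableName, Admin, TableDescriptorBuilder, ColumnFamilyDescriptorBuilder).

    // Hypothetical sketch: create a one-family table like GrouptestMultiTableMoveB
    // and wait until its single region is assigned, as logged above.
    static void createAndWait(HBaseTestingUtility TEST_UTIL) throws Exception {
      TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
      try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
        admin.createTable(TableDescriptorBuilder.newBuilder(tableB)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // family 'f' with default attributes, matching the logged descriptor
            .build());
      }
      // Mirrors "Waiting until all regions of table GrouptestMultiTableMoveB get assigned."
      TEST_UTIL.waitUntilAllRegionsAssigned(tableB);
    }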
2023-07-13 03:16:31,088 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:31,088 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-13 03:16:31,089 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:31,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 03:16:31,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:31,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 03:16:31,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:31,103 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:31,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:31,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:31,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 5c8a6b08d0c711cac3e654805ec9be36 to RSGroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, REOPEN/MOVE 2023-07-13 03:16:31,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup 
Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,116 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, REOPEN/MOVE 2023-07-13 03:16:31,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 874e08371a3a7899ac31bb3db63ba4fd to RSGroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:31,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, REOPEN/MOVE 2023-07-13 03:16:31,118 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:31,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1942187643, current retry=0 2023-07-13 03:16:31,118 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, REOPEN/MOVE 2023-07-13 03:16:31,118 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191118"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218191118"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218191118"}]},"ts":"1689218191118"} 2023-07-13 03:16:31,119 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:31,119 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191119"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218191119"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218191119"}]},"ts":"1689218191119"} 2023-07-13 03:16:31,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=106, state=RUNNABLE; CloseRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:31,123 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=107, state=RUNNABLE; CloseRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:31,274 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5c8a6b08d0c711cac3e654805ec9be36, disabling compactions & flushes 2023-07-13 03:16:31,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): 
Closing region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. after waiting 0 ms 2023-07-13 03:16:31,275 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,277 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 874e08371a3a7899ac31bb3db63ba4fd, disabling compactions & flushes 2023-07-13 03:16:31,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:31,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:31,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. after waiting 0 ms 2023-07-13 03:16:31,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:31,281 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:31,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5c8a6b08d0c711cac3e654805ec9be36: 2023-07-13 03:16:31,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 5c8a6b08d0c711cac3e654805ec9be36 move to jenkins-hbase20.apache.org,32993,1689218172776 record at close sequenceid=2 2023-07-13 03:16:31,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:31,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
2023-07-13 03:16:31,283 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 874e08371a3a7899ac31bb3db63ba4fd: 2023-07-13 03:16:31,283 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 874e08371a3a7899ac31bb3db63ba4fd move to jenkins-hbase20.apache.org,32993,1689218172776 record at close sequenceid=2 2023-07-13 03:16:31,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,285 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=CLOSED 2023-07-13 03:16:31,285 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191285"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218191285"}]},"ts":"1689218191285"} 2023-07-13 03:16:31,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,285 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=CLOSED 2023-07-13 03:16:31,286 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191285"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218191285"}]},"ts":"1689218191285"} 2023-07-13 03:16:31,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=106 2023-07-13 03:16:31,289 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=106, state=SUCCESS; CloseRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,37181,1689218172183 in 167 msec 2023-07-13 03:16:31,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=107 2023-07-13 03:16:31,289 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=107, state=SUCCESS; CloseRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,44171,1689218172445 in 164 msec 2023-07-13 03:16:31,290 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=106, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,32993,1689218172776; forceNewPlan=false, retain=false 2023-07-13 03:16:31,290 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=107, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,32993,1689218172776; forceNewPlan=false, retain=false 2023-07-13 03:16:31,440 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 
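At this point both regions have been closed on their original servers (5c8a6b08d0c711cac3e654805ec9be36 on jenkins-hbase20.apache.org,37181 and 874e08371a3a7899ac31bb3db63ba4fd on jenkins-hbase20.apache.org,44171) and the REOPEN/MOVE procedures are reassigning them to jenkins-hbase20.apache.org,32993. The rsgroup server drives this through TransitRegionStateProcedure; a plain region move issued from a client produces the same close-then-reopen sequence. A rough sketch, not taken from the test, assuming a Connection named conn and imports for Admin, ServerName and Bytes:

    // Hypothetical sketch: request a single region move to a named server.
    // The rsgroup move above triggers the equivalent procedure for each region.
    try (Admin admin = conn.getAdmin()) {
      admin.move(Bytes.toBytes("5c8a6b08d0c711cac3e654805ec9be36"),              // encoded region name from the log
          ServerName.valueOf("jenkins-hbase20.apache.org,32993,1689218172776")); // destination server from the log
    }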
2023-07-13 03:16:31,440 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:31,440 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218191440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218191440"}]},"ts":"1689218191440"} 2023-07-13 03:16:31,440 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191440"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218191440"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218191440"}]},"ts":"1689218191440"} 2023-07-13 03:16:31,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=107, state=RUNNABLE; OpenRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:31,442 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=106, state=RUNNABLE; OpenRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:31,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 
2023-07-13 03:16:31,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c8a6b08d0c711cac3e654805ec9be36, NAME => 'GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:31,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:31,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,601 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,602 DEBUG [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/f 2023-07-13 03:16:31,602 DEBUG [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/f 2023-07-13 03:16:31,603 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c8a6b08d0c711cac3e654805ec9be36 columnFamilyName f 2023-07-13 03:16:31,604 INFO [StoreOpener-5c8a6b08d0c711cac3e654805ec9be36-1] regionserver.HStore(310): Store=5c8a6b08d0c711cac3e654805ec9be36/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:31,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,607 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:31,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5c8a6b08d0c711cac3e654805ec9be36; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9951028960, jitterRate=-0.07323820888996124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:31,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5c8a6b08d0c711cac3e654805ec9be36: 2023-07-13 03:16:31,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36., pid=111, masterSystemTime=1689218191593 2023-07-13 03:16:31,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,612 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:31,612 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
2023-07-13 03:16:31,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 874e08371a3a7899ac31bb3db63ba4fd, NAME => 'GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:31,612 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=106 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:31,612 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191612"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218191612"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218191612"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218191612"}]},"ts":"1689218191612"} 2023-07-13 03:16:31,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:31,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,614 INFO [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,615 DEBUG [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/f 2023-07-13 03:16:31,615 DEBUG [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/f 2023-07-13 03:16:31,615 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=106 2023-07-13 03:16:31,615 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=106, state=SUCCESS; OpenRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,32993,1689218172776 in 172 msec 2023-07-13 03:16:31,616 INFO [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 874e08371a3a7899ac31bb3db63ba4fd columnFamilyName f 2023-07-13 03:16:31,616 INFO [StoreOpener-874e08371a3a7899ac31bb3db63ba4fd-1] regionserver.HStore(310): Store=874e08371a3a7899ac31bb3db63ba4fd/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:31,617 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, REOPEN/MOVE in 503 msec 2023-07-13 03:16:31,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,618 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,622 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:31,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 874e08371a3a7899ac31bb3db63ba4fd; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9829848640, jitterRate=-0.084524005651474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:31,623 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 874e08371a3a7899ac31bb3db63ba4fd: 2023-07-13 03:16:31,623 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd., pid=110, masterSystemTime=1689218191593 2023-07-13 03:16:31,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:31,625 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
2023-07-13 03:16:31,625 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:31,625 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218191625"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218191625"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218191625"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218191625"}]},"ts":"1689218191625"} 2023-07-13 03:16:31,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=107 2023-07-13 03:16:31,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=107, state=SUCCESS; OpenRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,32993,1689218172776 in 185 msec 2023-07-13 03:16:31,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, REOPEN/MOVE in 511 msec 2023-07-13 03:16:32,014 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 03:16:32,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=106 2023-07-13 03:16:32,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1942187643. 
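That last entry closes out the RSGroupAdminService.MoveTables request: both REOPEN/MOVE procedures finished and the handler reports all regions of [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] in the target group. The request itself, and the GetRSGroupInfoOfTable lookups used just below to verify it, correspond roughly to the rsgroup client calls sketched here; this assumes a Connection named conn, the RSGroupAdminClient/RSGroupInfo classes from the hbase-rsgroup module, and the usual java.util imports, and it is not the test's literal code.

    // Hypothetical sketch: move both tables into the target rsgroup, then read the
    // group of one table back, as the GetRSGroupInfoOfTable entries below do.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Set<TableName> tables = new HashSet<>(Arrays.asList(
        TableName.valueOf("GrouptestMultiTableMoveA"),
        TableName.valueOf("GrouptestMultiTableMoveB")));
    rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_1942187643"); // drives the REOPEN/MOVE procedures seen above
    RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(
        TableName.valueOf("GrouptestMultiTableMoveB"));                     // expected to name the target group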
2023-07-13 03:16:32,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:32,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:32,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:32,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:32,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:32,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:32,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:32,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1942187643 2023-07-13 03:16:32,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:32,130 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-13 03:16:32,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveA 2023-07-13 03:16:32,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-13 03:16:32,135 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218192135"}]},"ts":"1689218192135"} 2023-07-13 03:16:32,136 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-13 03:16:32,137 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-13 03:16:32,138 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, UNASSIGN}] 2023-07-13 03:16:32,140 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=113, ppid=112, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, UNASSIGN 2023-07-13 03:16:32,141 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=113 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:32,141 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218192141"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218192141"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218192141"}]},"ts":"1689218192141"} 2023-07-13 03:16:32,142 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE; CloseRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:32,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-13 03:16:32,295 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:32,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 874e08371a3a7899ac31bb3db63ba4fd, disabling compactions & flushes 2023-07-13 03:16:32,296 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:32,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:32,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. after waiting 0 ms 2023-07-13 03:16:32,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 
2023-07-13 03:16:32,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:32,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd. 2023-07-13 03:16:32,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 874e08371a3a7899ac31bb3db63ba4fd: 2023-07-13 03:16:32,306 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:32,311 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=113 updating hbase:meta row=874e08371a3a7899ac31bb3db63ba4fd, regionState=CLOSED 2023-07-13 03:16:32,311 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218192311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218192311"}]},"ts":"1689218192311"} 2023-07-13 03:16:32,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-13 03:16:32,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; CloseRegionProcedure 874e08371a3a7899ac31bb3db63ba4fd, server=jenkins-hbase20.apache.org,32993,1689218172776 in 171 msec 2023-07-13 03:16:32,316 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-13 03:16:32,316 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=874e08371a3a7899ac31bb3db63ba4fd, UNASSIGN in 176 msec 2023-07-13 03:16:32,316 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218192316"}]},"ts":"1689218192316"} 2023-07-13 03:16:32,317 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-13 03:16:32,321 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-13 03:16:32,323 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 192 msec 2023-07-13 03:16:32,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-13 03:16:32,437 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-13 03:16:32,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveA 2023-07-13 03:16:32,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=115, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; 
DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,442 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=115, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1942187643' 2023-07-13 03:16:32,443 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=115, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:32,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:32,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:32,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:32,449 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:32,452 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits] 2023-07-13 03:16:32,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=115 2023-07-13 03:16:32,464 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd/recovered.edits/7.seqid 2023-07-13 03:16:32,465 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveA/874e08371a3a7899ac31bb3db63ba4fd 2023-07-13 03:16:32,465 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-13 03:16:32,468 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=115, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,470 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from 
hbase:meta 2023-07-13 03:16:32,472 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-13 03:16:32,474 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=115, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,474 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-13 03:16:32,474 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218192474"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:32,476 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 03:16:32,476 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 874e08371a3a7899ac31bb3db63ba4fd, NAME => 'GrouptestMultiTableMoveA,,1689218189848.874e08371a3a7899ac31bb3db63ba4fd.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 03:16:32,476 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-13 03:16:32,476 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218192476"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:32,478 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-13 03:16:32,479 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=115, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-13 03:16:32,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=115, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 41 msec 2023-07-13 03:16:32,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=115 2023-07-13 03:16:32,557 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 115 completed 2023-07-13 03:16:32,558 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-13 03:16:32,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable GrouptestMultiTableMoveB 2023-07-13 03:16:32,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-13 03:16:32,565 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218192564"}]},"ts":"1689218192564"} 2023-07-13 03:16:32,566 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-13 03:16:32,568 INFO [PEWorker-3] 
procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-13 03:16:32,569 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, UNASSIGN}] 2023-07-13 03:16:32,571 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, ppid=116, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, UNASSIGN 2023-07-13 03:16:32,572 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:32,573 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218192572"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218192572"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218192572"}]},"ts":"1689218192572"} 2023-07-13 03:16:32,575 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE; CloseRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,32993,1689218172776}] 2023-07-13 03:16:32,624 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-13 03:16:32,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-13 03:16:32,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:32,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5c8a6b08d0c711cac3e654805ec9be36, disabling compactions & flushes 2023-07-13 03:16:32,729 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:32,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:32,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. after waiting 0 ms 2023-07-13 03:16:32,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 
2023-07-13 03:16:32,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:32,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36. 2023-07-13 03:16:32,737 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5c8a6b08d0c711cac3e654805ec9be36: 2023-07-13 03:16:32,739 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:32,742 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5c8a6b08d0c711cac3e654805ec9be36, regionState=CLOSED 2023-07-13 03:16:32,742 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689218192742"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218192742"}]},"ts":"1689218192742"} 2023-07-13 03:16:32,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-13 03:16:32,746 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 5c8a6b08d0c711cac3e654805ec9be36, server=jenkins-hbase20.apache.org,32993,1689218172776 in 169 msec 2023-07-13 03:16:32,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=117, resume processing ppid=116 2023-07-13 03:16:32,750 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, ppid=116, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5c8a6b08d0c711cac3e654805ec9be36, UNASSIGN in 178 msec 2023-07-13 03:16:32,752 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218192752"}]},"ts":"1689218192752"} 2023-07-13 03:16:32,754 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-13 03:16:32,755 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-13 03:16:32,757 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 197 msec 2023-07-13 03:16:32,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-13 03:16:32,866 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-13 03:16:32,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete GrouptestMultiTableMoveB 2023-07-13 03:16:32,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; 
DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,869 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=119, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1942187643' 2023-07-13 03:16:32,870 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=119, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:32,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:32,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:32,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:32,875 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:32,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-13 03:16:32,877 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits] 2023-07-13 03:16:32,883 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits/7.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36/recovered.edits/7.seqid 2023-07-13 03:16:32,883 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/GrouptestMultiTableMoveB/5c8a6b08d0c711cac3e654805ec9be36 2023-07-13 03:16:32,884 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-13 03:16:32,888 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=119, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,890 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from 
hbase:meta 2023-07-13 03:16:32,892 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-13 03:16:32,893 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=119, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,893 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-13 03:16:32,894 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218192894"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:32,902 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 03:16:32,902 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5c8a6b08d0c711cac3e654805ec9be36, NAME => 'GrouptestMultiTableMoveB,,1689218190473.5c8a6b08d0c711cac3e654805ec9be36.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 03:16:32,902 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-13 03:16:32,902 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218192902"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:32,904 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-13 03:16:32,906 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=119, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-13 03:16:32,910 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 40 msec 2023-07-13 03:16:32,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=119 2023-07-13 03:16:32,977 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 119 completed 2023-07-13 03:16:32,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:32,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:32,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:32,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
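[editor's note] The DISABLE/DELETE records above for GrouptestMultiTableMoveA and GrouptestMultiTableMoveB are driven by ordinary Admin calls from the test's teardown; the master turns each call into a DisableTableProcedure / DeleteTableProcedure as logged. A minimal client-side sketch of the same disable-then-delete sequence is shown below. It assumes a reachable cluster with an hbase-site.xml on the classpath; the class name and loop are illustrative and not part of the test code.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTestTables {
      public static void main(String[] args) throws Exception {
        // Assumes an hbase-site.xml on the classpath pointing at a running cluster.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (String name : new String[] { "GrouptestMultiTableMoveA", "GrouptestMultiTableMoveB" }) {
            TableName tn = TableName.valueOf(name);
            if (admin.tableExists(tn)) {
              admin.disableTable(tn); // master runs DisableTableProcedure (unassign regions, mark DISABLED)
              admin.deleteTable(tn);  // master runs DeleteTableProcedure (archive region files, clean hbase:meta)
            }
          }
        }
      }
    }
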
2023-07-13 03:16:32,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:32,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993] to rsgroup default 2023-07-13 03:16:32,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:32,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1942187643 2023-07-13 03:16:32,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:32,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:32,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1942187643, current retry=0 2023-07-13 03:16:32,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776] are moved back to Group_testMultiTableMove_1942187643 2023-07-13 03:16:32,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1942187643 => default 2023-07-13 03:16:32,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:32,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testMultiTableMove_1942187643 2023-07-13 03:16:33,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:33,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
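[editor's note] The rsgroup teardown above (move servers back to default, then remove Group_testMultiTableMove_1942187643) reaches the master through the RSGroupAdminClient coprocessor client, the same class the later stack traces reference (RSGroupAdminClient.moveServers). A minimal sketch of that cleanup under stated assumptions follows: it assumes the hbase-rsgroup client is on the classpath, and the server address used is a placeholder rather than the jenkins-hbase20.apache.org host seen in the log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RsGroupCleanup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Placeholder host:port; in the log the moved server is a minicluster region server.
          Address server = Address.fromParts("regionserver.example.org", 16020);
          // Move the server back to the default group, then drop the now-empty test group.
          rsGroupAdmin.moveServers(Collections.singleton(server), "default");
          rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1942187643");
        }
      }
    }
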
2023-07-13 03:16:33,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:33,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:33,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:33,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,027 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:33,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:33,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:33,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:33,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219393058, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:33,061 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:33,063 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:33,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,066 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:33,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,086 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=518 (was 522), OpenFileDescriptor=822 (was 824), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 500) - SystemLoadAverage LEAK? -, ProcessCount=173 (was 170) - ProcessCount LEAK? -, AvailableMemoryMB=3996 (was 3526) - AvailableMemoryMB LEAK? 
- 2023-07-13 03:16:33,086 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-13 03:16:33,105 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=518, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=173, AvailableMemoryMB=3992 2023-07-13 03:16:33,105 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=518 is superior to 500 2023-07-13 03:16:33,106 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-13 03:16:33,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:33,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:33,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:33,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:33,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,121 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:33,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:33,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:33,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:33,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219393158, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:33,159 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:33,161 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:33,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,162 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:33,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldGroup 2023-07-13 03:16:33,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup oldGroup 2023-07-13 03:16:33,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:33,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to default 2023-07-13 03:16:33,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-13 03:16:33,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 03:16:33,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldGroup 2023-07-13 03:16:33,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,204 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup anotherRSGroup 2023-07-13 03:16:33,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 03:16:33,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:33,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44171] to rsgroup anotherRSGroup 2023-07-13 03:16:33,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 03:16:33,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:33,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:33,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44171,1689218172445] are moved back to default 2023-07-13 03:16:33,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-13 03:16:33,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 
03:16:33,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 03:16:33,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-13 03:16:33,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-13 03:16:33,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 148.251.75.209:45566 deadline: 1689219393240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-13 03:16:33,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to anotherRSGroup 2023-07-13 03:16:33,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 148.251.75.209:45566 deadline: 1689219393243, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-13 03:16:33,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from default to newRSGroup2 2023-07-13 03:16:33,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 148.251.75.209:45566 deadline: 1689219393244, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-13 03:16:33,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldGroup to default 2023-07-13 03:16:33,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type 
org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 148.251.75.209:45566 deadline: 1689219393245, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-13 03:16:33,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
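
The ConstraintException stacks above all originate in RSGroupInfoManagerImpl.renameRSGroup, and the source line numbers in the traces (403, 407, 410) show the order of the checks: renaming the reserved default group is rejected first, then a missing source group, then a target name that is already taken; only when all three pass would the group map and its znodes be rewritten. A minimal self-contained sketch of that validation order, in plain Java with hypothetical names and a generic runtime exception standing in for the real ConstraintException (this is not the actual HBase class):

import java.util.HashMap;
import java.util.Map;

// Simplified model of the rename checks traced in the log; the real logic lives in
// org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl#renameRSGroup on branch-2.4.
public class RenameRSGroupSketch {
    static final String DEFAULT_GROUP = "default";
    // group name -> opaque group info (a String stands in for RSGroupInfo here)
    private final Map<String, String> groups = new HashMap<>();

    void renameRSGroup(String oldName, String newName) {
        if (DEFAULT_GROUP.equals(oldName)) {                       // check 1 (line 403 in the trace)
            throw new IllegalStateException("Can't rename default rsgroup");
        }
        if (!groups.containsKey(oldName)) {                        // check 2 (line 407)
            throw new IllegalStateException("RSGroup " + oldName + " does not exist");
        }
        if (groups.containsKey(newName)) {                         // check 3 (line 410)
            throw new IllegalStateException("Group already exists: " + newName);
        }
        groups.put(newName, groups.remove(oldName));               // only now is state mutated
    }

    public static void main(String[] args) {
        RenameRSGroupSketch m = new RenameRSGroupSketch();
        m.groups.put(DEFAULT_GROUP, "");
        m.groups.put("oldGroup", "");
        m.groups.put("anotherRSGroup", "");
        // Mirrors the four failing rename calls in the log above.
        String[][] cases = {
            {"nonExistingRSGroup", "newRSGroup1"},
            {"oldGroup", "anotherRSGroup"},
            {"default", "newRSGroup2"},
            {"oldGroup", "default"}};
        for (String[] c : cases) {
            try {
                m.renameRSGroup(c[0], c[1]);
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage());
            }
        }
    }
}
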
2023-07-13 03:16:33,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44171] to rsgroup default 2023-07-13 03:16:33,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-13 03:16:33,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:33,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-13 03:16:33,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44171,1689218172445] are moved back to anotherRSGroup 2023-07-13 03:16:33,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-13 03:16:33,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup anotherRSGroup 2023-07-13 03:16:33,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 03:16:33,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): 
moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:33,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup default 2023-07-13 03:16:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-13 03:16:33,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-13 03:16:33,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to oldGroup 2023-07-13 03:16:33,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-13 03:16:33,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup oldGroup 2023-07-13 03:16:33,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:33,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
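
The block above is the standard per-test cleanup: anything still parked in a custom group is moved back to default and the group is then removed, which is why the ZK GroupInfo count steps back down (8 -> 7 -> 6 -> 5 here) and why the empty move-tables calls are simply ignored. A rough sketch of that loop against the rsgroup client classes named in the stack traces further down; the method names follow what the test code uses, but the exact signatures should be treated as assumptions, and the special handling of the master-only "master" group is omitted:

import java.io.IOException;
import java.util.HashSet;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

// Sketch of the cleanup the log is tracing: every non-default group has its tables and
// servers pushed back to "default" and is then removed. Not the real TestRSGroupsBase code.
public final class RSGroupCleanupSketch {
  private RSGroupCleanupSketch() {}

  static void moveEverythingBackToDefault(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    for (RSGroupInfo group : admin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue; // "default" is the destination, never a source
      }
      // Empty sets are tolerated server-side ("moveTables() passed an empty set. Ignoring.")
      admin.moveTables(new HashSet<TableName>(group.getTables()), RSGroupInfo.DEFAULT_GROUP);
      admin.moveServers(new HashSet<Address>(group.getServers()), RSGroupInfo.DEFAULT_GROUP);
      admin.removeRSGroup(group.getName());
    }
  }
}
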
2023-07-13 03:16:33,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:33,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:33,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:33,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,307 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:33,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:33,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:33,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:33,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219393320, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:33,321 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:33,323 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:33,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,325 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:33,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,325 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,343 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=522 (was 518) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 524), ProcessCount=173 (was 173), AvailableMemoryMB=3932 (was 3992) 2023-07-13 03:16:33,344 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 03:16:33,363 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=522, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=173, AvailableMemoryMB=3923 2023-07-13 03:16:33,364 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-13 03:16:33,364 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-13 03:16:33,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:33,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
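
The "before:"/"after:" summaries above come from HBase's ResourceChecker, which the test harness runs around each test method to snapshot live threads, open file descriptors, system load and available memory; the WARN fires because the 522 live threads exceed its 500-thread threshold. A rough, self-contained approximation of that bookkeeping using only standard JMX beans (not the real ResourceChecker implementation, and the thread limit is just copied from the WARN line):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

// Self-contained sketch of a before/after resource snapshot in the spirit of
// org.apache.hadoop.hbase.ResourceChecker; simplified, illustrative only.
public class ResourceSnapshotSketch {
  private static final int THREAD_LIMIT = 500; // the limit the "is superior to 500" WARN refers to

  static int liveThreads() {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    return threads.getThreadCount();
  }

  static double loadAverage() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    return os.getSystemLoadAverage(); // may be negative if unsupported on the platform
  }

  public static void main(String[] args) {
    int threadsBefore = liveThreads();
    double loadBefore = loadAverage();
    // ... the test body would run here ...
    int threadsAfter = liveThreads();
    double loadAfter = loadAverage();
    System.out.printf("after: Thread=%d (was %d), SystemLoadAverage=%.0f (was %.0f)%n",
        threadsAfter, threadsBefore, loadAfter, loadBefore);
    if (threadsAfter > THREAD_LIMIT) {
      System.out.printf("WARN Thread=%d is superior to %d%n", threadsAfter, THREAD_LIMIT);
    }
  }
}
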
2023-07-13 03:16:33,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:33,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:33,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:33,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:33,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:33,387 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:33,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:33,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:33,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:33,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:33,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219393405, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:33,405 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:33,407 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,408 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:33,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:33,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup oldgroup 2023-07-13 03:16:33,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:33,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,415 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:33,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup oldgroup 2023-07-13 03:16:33,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:33,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:33,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to default 2023-07-13 03:16:33,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-13 03:16:33,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:33,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:33,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:33,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 03:16:33,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:33,435 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:33,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-13 03:16:33,442 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:33,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 120 2023-07-13 03:16:33,444 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:33,445 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:33,445 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:33,446 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:33,449 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:33,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 03:16:33,451 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:33,451 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea empty. 
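
The descriptor dumped by HMaster above is simply a one-family table ("tr") with REGION_REPLICATION => '1' and every other attribute at its default, which the log spells out explicitly. The client-side call that produces such a request would look roughly like the following sketch using the standard HBase 2.x Admin API; the connection setup is an assumption about the caller's environment:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Client-side equivalent of the create call logged by HMaster: table "testRename"
// with a single column family "tr" and all other attributes left at their defaults.
public class CreateTestRenameTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1) // REGION_REPLICATION => '1' in the logged descriptor
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      admin.createTable(desc); // drives a CreateTableProcedure like pid=120 in this run
    }
  }
}
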
2023-07-13 03:16:33,452 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:33,452 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-13 03:16:33,470 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:33,472 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 98191189e1297e8d8e6d58f3c26a3bea, NAME => 'testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:33,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 03:16:33,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 03:16:33,893 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:33,893 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 98191189e1297e8d8e6d58f3c26a3bea, disabling compactions & flushes 2023-07-13 03:16:33,894 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:33,894 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:33,894 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. after waiting 0 ms 2023-07-13 03:16:33,894 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:33,894 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:33,894 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:33,897 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:33,898 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218193898"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218193898"}]},"ts":"1689218193898"} 2023-07-13 03:16:33,909 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:33,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:33,911 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218193911"}]},"ts":"1689218193911"} 2023-07-13 03:16:33,914 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-13 03:16:33,922 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:33,922 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:33,922 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:33,923 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:33,923 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, ASSIGN}] 2023-07-13 03:16:33,925 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, ASSIGN 2023-07-13 03:16:33,926 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:34,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 03:16:34,077 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 03:16:34,078 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:34,078 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218194078"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218194078"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218194078"}]},"ts":"1689218194078"} 2023-07-13 03:16:34,082 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:34,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:34,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98191189e1297e8d8e6d58f3c26a3bea, NAME => 'testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:34,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:34,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,240 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,241 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:34,241 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:34,242 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered 
compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98191189e1297e8d8e6d58f3c26a3bea columnFamilyName tr 2023-07-13 03:16:34,243 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(310): Store=98191189e1297e8d8e6d58f3c26a3bea/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:34,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:34,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 98191189e1297e8d8e6d58f3c26a3bea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10497208000, jitterRate=-0.0223713219165802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:34,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:34,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea., pid=122, masterSystemTime=1689218194234 2023-07-13 03:16:34,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:34,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:34,252 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:34,252 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218194252"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218194252"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218194252"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218194252"}]},"ts":"1689218194252"} 2023-07-13 03:16:34,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-13 03:16:34,256 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44325,1689218176275 in 172 msec 2023-07-13 03:16:34,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-13 03:16:34,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, ASSIGN in 333 msec 2023-07-13 03:16:34,258 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:34,259 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218194258"}]},"ts":"1689218194258"} 2023-07-13 03:16:34,260 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-13 03:16:34,263 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:34,266 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=testRename in 828 msec 2023-07-13 03:16:34,539 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-13 03:16:34,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-13 03:16:34,556 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 120 completed 2023-07-13 03:16:34,557 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-13 03:16:34,557 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:34,563 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
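(Annotation, not part of the captured log: the entries above trace a complete CreateTableProcedure for 'testRename'. As a hedged, illustrative sketch only — not the test's verbatim code — the client-side call that produces such a procedure could look like the following with the public HBase 2.x Admin API; the Configuration/Connection setup here is an assumption, since the test obtains its connection from the mini cluster utility.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: the configuration points at the running mini cluster; the test itself
    // would use its HBaseTestingUtility connection rather than building one here.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Matches the descriptor printed by HMaster above: one column family 'tr',
      // REGION_REPLICATION => '1', all other attributes left at their defaults.
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      // Synchronous create: drives a CreateTableProcedure like pid=120 above and polls
      // the master ("Checking to see if procedure is done") until it finishes.
      admin.createTable(td);
    }
  }
}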
2023-07-13 03:16:34,563 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:34,563 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-13 03:16:34,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup oldgroup 2023-07-13 03:16:34,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:34,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:34,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:34,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:34,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-13 03:16:34,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 98191189e1297e8d8e6d58f3c26a3bea to RSGroup oldgroup 2023-07-13 03:16:34,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:34,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:34,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:34,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:34,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:34,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE 2023-07-13 03:16:34,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-13 03:16:34,578 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE 2023-07-13 03:16:34,579 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:34,580 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218194579"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218194579"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218194579"}]},"ts":"1689218194579"} 2023-07-13 03:16:34,582 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:34,734 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 98191189e1297e8d8e6d58f3c26a3bea, disabling compactions & flushes 2023-07-13 03:16:34,735 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. after waiting 0 ms 2023-07-13 03:16:34,735 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:34,739 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:34,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:34,740 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:34,740 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 98191189e1297e8d8e6d58f3c26a3bea move to jenkins-hbase20.apache.org,37181,1689218172183 record at close sequenceid=2 2023-07-13 03:16:34,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:34,743 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=CLOSED 2023-07-13 03:16:34,743 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218194743"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218194743"}]},"ts":"1689218194743"} 2023-07-13 03:16:34,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-13 03:16:34,745 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44325,1689218176275 in 162 msec 2023-07-13 03:16:34,746 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,37181,1689218172183; forceNewPlan=false, retain=false 2023-07-13 03:16:34,896 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:34,896 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:34,897 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218194896"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218194896"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218194896"}]},"ts":"1689218194896"} 2023-07-13 03:16:34,898 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:35,054 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:35,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98191189e1297e8d8e6d58f3c26a3bea, NAME => 'testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:35,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:35,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,056 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,057 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:35,058 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:35,058 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98191189e1297e8d8e6d58f3c26a3bea columnFamilyName tr 2023-07-13 03:16:35,059 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(310): Store=98191189e1297e8d8e6d58f3c26a3bea/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:35,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:35,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 98191189e1297e8d8e6d58f3c26a3bea; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11575639200, jitterRate=0.0780654102563858}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:35,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:35,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea., pid=125, masterSystemTime=1689218195050 2023-07-13 03:16:35,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:35,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:35,069 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:35,069 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218195069"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218195069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218195069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218195069"}]},"ts":"1689218195069"} 2023-07-13 03:16:35,073 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-13 03:16:35,073 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,37181,1689218172183 in 173 msec 2023-07-13 03:16:35,074 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE in 496 msec 2023-07-13 03:16:35,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-13 03:16:35,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
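(Annotation, not part of the captured log: the preceding sequence — AddRSGroup 'oldgroup', MoveServers, then MoveTables for 'testRename' ending in the REOPEN/MOVE of region 98191189e1297e8d8e6d58f3c26a3bea — corresponds to rsgroup admin RPCs on the master. Below is a hedged sketch of the equivalent client-side calls, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient signatures and a Connection supplied by the caller; the host:port values are copied from the log purely for illustration.)

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTestRenameToOldGroupSketch {
  // Assumption: 'conn' is an open Connection to the mini cluster.
  static void run(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // AddRSGroup + MoveServers, as in the 03:16:33 entries above.
    rsGroupAdmin.addRSGroup("oldgroup");
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 32993));
    servers.add(Address.fromParts("jenkins-hbase20.apache.org", 37181));
    rsGroupAdmin.moveServers(servers, "oldgroup");

    // MoveTables: triggers the REOPEN/MOVE (pid=123) of the table's single region.
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("testRename")), "oldgroup");

    // GetRSGroupInfo / GetRSGroupInfoOfTable, matching the verification RPCs in the log.
    RSGroupInfo byName = rsGroupAdmin.getRSGroupInfo("oldgroup");
    RSGroupInfo byTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
  }
}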
2023-07-13 03:16:35,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:35,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:35,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:35,585 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:35,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-13 03:16:35,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:35,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=oldgroup 2023-07-13 03:16:35,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:35,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-13 03:16:35,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:35,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:35,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:35,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup normal 2023-07-13 03:16:35,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:35,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:35,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:35,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-13 03:16:35,595 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:35,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:35,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:35,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:35,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44171] to rsgroup normal 2023-07-13 03:16:35,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:35,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:35,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:35,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:35,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:35,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:35,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44171,1689218172445] are moved back to default 2023-07-13 03:16:35,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-13 03:16:35,613 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:35,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:35,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:35,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-13 03:16:35,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service 
request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:35,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:35,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-13 03:16:35,627 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:35,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 126 2023-07-13 03:16:35,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-13 03:16:35,632 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:35,633 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:35,633 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:35,634 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:35,634 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:35,639 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:35,641 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:35,642 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/unmovedTable/184069995e46652ffc86537736197d8c empty. 
2023-07-13 03:16:35,642 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:35,642 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-13 03:16:35,667 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:35,671 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 184069995e46652ffc86537736197d8c, NAME => 'unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 184069995e46652ffc86537736197d8c, disabling compactions & flushes 2023-07-13 03:16:35,703 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. after waiting 0 ms 2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:35,703 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 
2023-07-13 03:16:35,703 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:35,705 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:35,706 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218195706"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218195706"}]},"ts":"1689218195706"} 2023-07-13 03:16:35,708 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:35,709 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:35,710 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218195709"}]},"ts":"1689218195709"} 2023-07-13 03:16:35,711 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-13 03:16:35,714 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, ASSIGN}] 2023-07-13 03:16:35,716 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, ASSIGN 2023-07-13 03:16:35,717 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=127, ppid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:35,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-13 03:16:35,868 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:35,869 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218195868"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218195868"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218195868"}]},"ts":"1689218195868"} 2023-07-13 03:16:35,870 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=127, state=RUNNABLE; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:35,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=126 2023-07-13 03:16:36,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 184069995e46652ffc86537736197d8c, NAME => 'unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:36,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:36,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,027 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,028 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:36,028 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:36,028 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 184069995e46652ffc86537736197d8c columnFamilyName ut 2023-07-13 03:16:36,029 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(310): Store=184069995e46652ffc86537736197d8c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:36,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:36,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 184069995e46652ffc86537736197d8c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9412645280, jitterRate=-0.12337909638881683}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:36,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:36,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c., pid=128, masterSystemTime=1689218196021 2023-07-13 03:16:36,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 
2023-07-13 03:16:36,037 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=127 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:36,037 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218196037"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218196037"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218196037"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218196037"}]},"ts":"1689218196037"} 2023-07-13 03:16:36,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=127 2023-07-13 03:16:36,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=127, state=SUCCESS; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275 in 169 msec 2023-07-13 03:16:36,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-13 03:16:36,042 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, ASSIGN in 326 msec 2023-07-13 03:16:36,042 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:36,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218196043"}]},"ts":"1689218196043"} 2023-07-13 03:16:36,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-13 03:16:36,046 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=126, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:36,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; CreateTableProcedure table=unmovedTable in 423 msec 2023-07-13 03:16:36,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=126 2023-07-13 03:16:36,234 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 126 completed 2023-07-13 03:16:36,234 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-13 03:16:36,235 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:36,238 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
2023-07-13 03:16:36,238 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:36,239 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-13 03:16:36,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup normal 2023-07-13 03:16:36,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-13 03:16:36,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:36,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:36,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-13 03:16:36,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 184069995e46652ffc86537736197d8c to RSGroup normal 2023-07-13 03:16:36,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE 2023-07-13 03:16:36,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-13 03:16:36,247 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE 2023-07-13 03:16:36,247 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:36,247 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218196247"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218196247"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218196247"}]},"ts":"1689218196247"} 2023-07-13 03:16:36,249 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:36,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(1604): Closing 184069995e46652ffc86537736197d8c, disabling compactions & flushes 2023-07-13 03:16:36,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. after waiting 0 ms 2023-07-13 03:16:36,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:36,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:36,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 184069995e46652ffc86537736197d8c move to jenkins-hbase20.apache.org,44171,1689218172445 record at close sequenceid=2 2023-07-13 03:16:36,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,415 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=CLOSED 2023-07-13 03:16:36,415 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218196415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218196415"}]},"ts":"1689218196415"} 2023-07-13 03:16:36,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-13 03:16:36,419 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275 in 168 msec 2023-07-13 03:16:36,420 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:36,570 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:36,571 DEBUG [PEWorker-2] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218196570"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218196570"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218196570"}]},"ts":"1689218196570"} 2023-07-13 03:16:36,572 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:36,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 184069995e46652ffc86537736197d8c, NAME => 'unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:36,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:36,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,733 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,734 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:36,734 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:36,735 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 184069995e46652ffc86537736197d8c columnFamilyName ut 2023-07-13 03:16:36,736 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(310): Store=184069995e46652ffc86537736197d8c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:36,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,738 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:36,742 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 184069995e46652ffc86537736197d8c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9423401120, jitterRate=-0.12237738072872162}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:36,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:36,743 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c., pid=131, masterSystemTime=1689218196724 2023-07-13 03:16:36,745 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:36,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 
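The CompactionConfiguration line above is the store echoing its effective settings when the region reopens (minCompactSize 128 MB, min/max files 3/10, ratio 1.2, major period 604800000 ms, and so on). A hedged sketch of reading the corresponding configuration keys; the key names are standard HBase settings, but treat the defaults shown here as illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettingsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Minimum / maximum number of store files considered per minor compaction.
        int minFiles = conf.getInt("hbase.hstore.compaction.min", 3);
        int maxFiles = conf.getInt("hbase.hstore.compaction.max", 10);
        // Selection ratio used by the compaction policy (1.2 in the log above).
        float ratio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2f);
        // Major compaction period in milliseconds (604800000 ms = 7 days in the log above).
        long majorPeriodMs = conf.getLong("hbase.hregion.majorcompaction", 604_800_000L);
        System.out.printf("minFiles=%d maxFiles=%d ratio=%.1f majorPeriodMs=%d%n",
            minFiles, maxFiles, ratio, majorPeriodMs);
      }
    }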
2023-07-13 03:16:36,746 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:36,746 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218196746"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218196746"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218196746"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218196746"}]},"ts":"1689218196746"} 2023-07-13 03:16:36,749 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-13 03:16:36,749 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44171,1689218172445 in 175 msec 2023-07-13 03:16:36,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE in 503 msec 2023-07-13 03:16:37,100 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 03:16:37,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-13 03:16:37,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 
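Everything from the MoveTables request down through the REOPEN/MOVE procedure above is triggered by a single client call. A minimal sketch using the RSGroupAdminClient that this test's own stack traces reference; connection setup and the existence of the target group are assumed:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master then reopens each region of the table on servers of the target group,
          // which is the TransitRegionStateProcedure REOPEN/MOVE traffic seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "normal");
        }
      }
    }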
2023-07-13 03:16:37,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:37,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:37,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:37,254 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:37,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 03:16:37,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:37,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=normal 2023-07-13 03:16:37,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:37,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 03:16:37,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:37,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//148.251.75.209 rename rsgroup from oldgroup to newgroup 2023-07-13 03:16:37,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:37,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:37,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:37,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:37,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-13 03:16:37,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RenameRSGroup 2023-07-13 03:16:37,269 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:37,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:37,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=newgroup 2023-07-13 03:16:37,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:37,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=testRename 2023-07-13 03:16:37,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:37,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=unmovedTable 2023-07-13 03:16:37,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:37,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:37,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:37,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [unmovedTable] to rsgroup default 2023-07-13 03:16:37,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:37,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:37,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:37,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:37,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:37,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-13 03:16:37,284 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 184069995e46652ffc86537736197d8c to RSGroup default 2023-07-13 03:16:37,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE 2023-07-13 03:16:37,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 03:16:37,285 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE 2023-07-13 03:16:37,286 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:37,286 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218197286"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218197286"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218197286"}]},"ts":"1689218197286"} 2023-07-13 03:16:37,287 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:37,440 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 184069995e46652ffc86537736197d8c, disabling compactions & flushes 2023-07-13 03:16:37,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:37,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:37,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. after waiting 0 ms 2023-07-13 03:16:37,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:37,445 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:37,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 
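A few lines above, the endpoint logs "rename rsgroup from oldgroup to newgroup" and the master answers a RenameRSGroup service call. A hedged sketch of issuing that rename from a client; the renameRSGroup method name on RSGroupAdminClient is an assumption inferred from the service call in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Assumed client-side counterpart of the RSGroupAdminService.RenameRSGroup call above:
          // servers and tables of "oldgroup" end up under "newgroup", as the znode updates show.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
        }
      }
    }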
2023-07-13 03:16:37,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:37,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 184069995e46652ffc86537736197d8c move to jenkins-hbase20.apache.org,44325,1689218176275 record at close sequenceid=5 2023-07-13 03:16:37,449 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=CLOSED 2023-07-13 03:16:37,450 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218197449"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218197449"}]},"ts":"1689218197449"} 2023-07-13 03:16:37,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-13 03:16:37,453 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44171,1689218172445 in 164 msec 2023-07-13 03:16:37,454 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:37,604 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:37,604 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218197604"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218197604"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218197604"}]},"ts":"1689218197604"} 2023-07-13 03:16:37,606 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:37,761 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 
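The repeated "rsgroup info retrieval" requests above (GetRSGroupInfo and GetRSGroupInfoOfTable) are how the test verifies where a table landed after each move. A minimal sketch of the same checks, with connection setup assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupMembershipCheckSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Which group does the table belong to right now?
          RSGroupInfo byTable =
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable"));
          // What servers and tables does a named group hold?
          RSGroupInfo byName = rsGroupAdmin.getRSGroupInfo("newgroup");
          if (byTable != null) {
            System.out.println("unmovedTable is in group " + byTable.getName());
          }
          if (byName != null) {
            System.out.println("newgroup servers: " + byName.getServers());
          }
        }
      }
    }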
2023-07-13 03:16:37,761 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 184069995e46652ffc86537736197d8c, NAME => 'unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:37,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:37,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,763 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,764 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:37,764 DEBUG [StoreOpener-184069995e46652ffc86537736197d8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/ut 2023-07-13 03:16:37,764 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 184069995e46652ffc86537736197d8c columnFamilyName ut 2023-07-13 03:16:37,765 INFO [StoreOpener-184069995e46652ffc86537736197d8c-1] regionserver.HStore(310): Store=184069995e46652ffc86537736197d8c/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:37,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:37,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 184069995e46652ffc86537736197d8c; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10678606240, jitterRate=-0.005477294325828552}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:37,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:37,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c., pid=134, masterSystemTime=1689218197758 2023-07-13 03:16:37,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:37,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:37,773 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=184069995e46652ffc86537736197d8c, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:37,773 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689218197773"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218197773"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218197773"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218197773"}]},"ts":"1689218197773"} 2023-07-13 03:16:37,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-13 03:16:37,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 184069995e46652ffc86537736197d8c, server=jenkins-hbase20.apache.org,44325,1689218176275 in 168 msec 2023-07-13 03:16:37,777 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=184069995e46652ffc86537736197d8c, REOPEN/MOVE in 491 msec 2023-07-13 03:16:38,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-13 03:16:38,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
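Under the hood, each of these rsgroup moves is a region reopen on a chosen server, the same thing a client can request directly through Admin#move. A hedged sketch; the destination server name is taken from the log purely as an example, since in the traced run the rsgroup code picks the target itself:

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class DirectRegionMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // unmovedTable has a single region in this test run.
          List<RegionInfo> regions = admin.getRegions(TableName.valueOf("unmovedTable"));
          ServerName dest = ServerName.valueOf("jenkins-hbase20.apache.org,44325,1689218176275");
          // Close on the current server, reopen on dest: the CLOSE/OPEN pair traced above.
          admin.move(regions.get(0).getEncodedNameAsBytes(), dest);
        }
      }
    }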
2023-07-13 03:16:38,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:44171] to rsgroup default 2023-07-13 03:16:38,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-13 03:16:38,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:38,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-13 03:16:38,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,44171,1689218172445] are moved back to normal 2023-07-13 03:16:38,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-13 03:16:38,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:38,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup normal 2023-07-13 03:16:38,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:38,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:38,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:38,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 03:16:38,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:38,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:38,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
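The cleanup above moves the server back into the default group and then drops the now-empty group. The same two steps from a client, sketched; the host and port are the ones in the log, everything else is assumed:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class GroupCleanupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Put the region server back into the default group ...
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 44171)),
              "default");
          // ... then remove the group, which must hold no servers and no tables.
          rsGroupAdmin.removeRSGroup("normal");
        }
      }
    }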
2023-07-13 03:16:38,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:38,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:38,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:38,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:38,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:38,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:38,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:38,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:38,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [testRename] to rsgroup default 2023-07-13 03:16:38,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:38,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:38,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:38,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-13 03:16:38,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(345): Moving region 98191189e1297e8d8e6d58f3c26a3bea to RSGroup default 2023-07-13 03:16:38,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE 2023-07-13 03:16:38,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-13 03:16:38,322 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE 2023-07-13 03:16:38,323 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, 
regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:38,323 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218198323"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218198323"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218198323"}]},"ts":"1689218198323"} 2023-07-13 03:16:38,324 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE; CloseRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,37181,1689218172183}] 2023-07-13 03:16:38,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 98191189e1297e8d8e6d58f3c26a3bea, disabling compactions & flushes 2023-07-13 03:16:38,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:38,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:38,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. after waiting 0 ms 2023-07-13 03:16:38,478 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:38,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-13 03:16:38,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:38,483 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:38,483 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(3513): Adding 98191189e1297e8d8e6d58f3c26a3bea move to jenkins-hbase20.apache.org,44171,1689218172445 record at close sequenceid=5 2023-07-13 03:16:38,484 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,485 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=CLOSED 2023-07-13 03:16:38,485 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218198485"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218198485"}]},"ts":"1689218198485"} 2023-07-13 03:16:38,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=135 2023-07-13 03:16:38,487 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; CloseRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,37181,1689218172183 in 162 msec 2023-07-13 03:16:38,488 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:38,638 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:38,638 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:38,638 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218198638"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218198638"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218198638"}]},"ts":"1689218198638"} 2023-07-13 03:16:38,640 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=135, state=RUNNABLE; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:38,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:38,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98191189e1297e8d8e6d58f3c26a3bea, NAME => 'testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:38,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:38,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,794 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,795 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,796 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:38,796 DEBUG [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/tr 2023-07-13 03:16:38,797 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98191189e1297e8d8e6d58f3c26a3bea columnFamilyName tr 2023-07-13 03:16:38,797 INFO [StoreOpener-98191189e1297e8d8e6d58f3c26a3bea-1] regionserver.HStore(310): Store=98191189e1297e8d8e6d58f3c26a3bea/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:38,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,801 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:38,802 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 98191189e1297e8d8e6d58f3c26a3bea; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11874206240, jitterRate=0.10587163269519806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:38,802 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:38,803 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea., pid=137, masterSystemTime=1689218198791 2023-07-13 03:16:38,804 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:38,804 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:38,804 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=98191189e1297e8d8e6d58f3c26a3bea, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:38,805 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689218198804"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218198804"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218198804"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218198804"}]},"ts":"1689218198804"} 2023-07-13 03:16:38,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=135 2023-07-13 03:16:38,807 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; OpenRegionProcedure 98191189e1297e8d8e6d58f3c26a3bea, server=jenkins-hbase20.apache.org,44171,1689218172445 in 166 msec 2023-07-13 03:16:38,808 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=98191189e1297e8d8e6d58f3c26a3bea, REOPEN/MOVE in 486 msec 2023-07-13 03:16:39,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure.ProcedureSyncWait(216): waitFor pid=135 2023-07-13 03:16:39,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
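What follows is the standard TestRSGroupsBase teardown: remaining servers and tables are pushed back to the default group, the leftover groups are removed, and the final attempt to move the master's own address (jenkins-hbase20.apache.org:33491) into a group fails with the ConstraintException logged below, which the test merely warns about. A hedged sketch of that last, intentionally failing step:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's address is not an online region server, so the rsgroup manager
            // rejects it with the "is either offline or it does not exist" error seen below.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase20.apache.org", 33491)),
                "master");
          } catch (ConstraintException expected) {
            System.out.println("Got expected rejection: " + expected.getMessage());
          }
        }
      }
    }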
2023-07-13 03:16:39,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:39,324 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup default 2023-07-13 03:16:39,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-13 03:16:39,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:39,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-13 03:16:39,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to newgroup 2023-07-13 03:16:39,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-13 03:16:39,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:39,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup newgroup 2023-07-13 03:16:39,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:39,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:39,350 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:39,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:39,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:39,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:39,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:39,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 765 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219399360, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:39,360 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:39,362 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:39,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,363 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:39,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:39,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,381 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=517 (was 522), OpenFileDescriptor=801 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 524), ProcessCount=170 (was 173), AvailableMemoryMB=3575 (was 3923) 2023-07-13 03:16:39,381 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-13 03:16:39,399 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=517, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=170, AvailableMemoryMB=3573 2023-07-13 03:16:39,399 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-13 03:16:39,400 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-13 03:16:39,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:39,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
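The entries above record the per-test reset that TestRSGroupsBase performs around each method: tables and servers the test moved are returned to the default rsgroup, the test's group (here newgroup) is removed, a group named "master" is re-created, and an attempt to move what appears to be the master's own address (port 33491) into it fails with a ConstraintException that the harness only logs as "Got this on setup, FYI". A rough sketch of that reset sequence, assuming the RSGroupAdminClient from the hbase-rsgroup module and an open Connection to the mini-cluster (illustrative only, not the actual test source; names and error handling are simplified):

// Hypothetical sketch of the reset sequence recorded in the log above;
// not the real TestRSGroupsBase code. Assumes an open Connection.
import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class RsGroupResetSketch {
  static void resetGroups(Connection conn, String testGroup, Address masterAddr)
      throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    RSGroupInfo group = admin.getRSGroupInfo(testGroup);
    // Move the test group's tables and servers back to "default";
    // an empty table set is simply ignored ("passed an empty set. Ignoring.").
    admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
    admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
    admin.removeRSGroup(testGroup);
    // Re-create the "master" group; moving the master's own address into it
    // is rejected because the master is not an online region server.
    admin.addRSGroup("master");
    try {
      admin.moveServers(Collections.singleton(masterAddr), "master");
    } catch (ConstraintException expected) {
      // "Server ... is either offline or it does not exist." -- tolerated, as above.
    }
  }
}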
2023-07-13 03:16:39,406 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:39,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:39,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:39,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:39,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:39,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:39,415 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:39,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:39,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:39,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:39,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:39,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 793 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219399426, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:39,427 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:39,429 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:39,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,430 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:39,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:39,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=nonexistent 2023-07-13 03:16:39,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:39,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, server=bogus:123 2023-07-13 03:16:39,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-13 03:16:39,438 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=bogus 2023-07-13 03:16:39,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup bogus 2023-07-13 03:16:39,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 87 connection: 148.251.75.209:45566 deadline: 1689219399439, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-13 03:16:39,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [bogus:123] to rsgroup bogus 2023-07-13 03:16:39,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] 
ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 96 connection: 148.251.75.209:45566 deadline: 1689219399441, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 03:16:39,443 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-13 03:16:39,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=true 2023-07-13 03:16:39,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//148.251.75.209 balance rsgroup, group=bogus 2023-07-13 03:16:39,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 812 service: MasterService methodName: ExecMasterService size: 88 connection: 148.251.75.209:45566 deadline: 1689219399448, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-13 03:16:39,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:39,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
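The testBogusArgs entries above exercise a simple contract: lookups for an unknown group, table, or server return nothing, while mutating calls against an unknown group (removeRSGroup, moveServers, balanceRSGroup) are rejected with ConstraintException. A minimal sketch of those checks, again assuming RSGroupAdminClient and an open Connection (illustrative only, not the test source):

// Hypothetical sketch of the bogus-argument checks logged above.
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class BogusArgsSketch {
  static void check(Connection conn) throws Exception {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    // Lookups for unknown names come back empty rather than throwing.
    assert admin.getRSGroupInfo("bogus") == null;
    assert admin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent")) == null;
    assert admin.getRSGroupOfServer(Address.fromParts("bogus", 123)) == null;
    // Mutations against an unknown group fail with ConstraintException.
    try {
      admin.removeRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup bogus does not exist"
    }
    try {
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
    try {
      admin.balanceRSGroup("bogus");
    } catch (ConstraintException expected) {
      // "RSGroup does not exist: bogus"
    }
  }
}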
2023-07-13 03:16:39,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:39,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:39,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:39,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:39,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:39,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:39,462 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:39,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:39,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:39,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:39,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:39,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 836 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219399479, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:39,482 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:39,484 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:39,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,486 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:39,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:39,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,504 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=521 (was 517) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4b805d89-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 490), ProcessCount=170 (was 170), AvailableMemoryMB=3566 (was 3573) 2023-07-13 03:16:39,504 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-13 03:16:39,519 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=521, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=170, AvailableMemoryMB=3565 2023-07-13 03:16:39,519 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-13 03:16:39,519 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-13 03:16:39,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:39,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:39,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:39,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:39,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:39,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:39,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:39,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:39,536 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:39,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:39,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:39,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:39,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:39,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:39,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 864 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219399553, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:39,554 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:39,556 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:39,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,558 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:39,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:39,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:39,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 
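The RSGroupAdminService requests recorded in this stretch of the log (RemoveRSGroup, AddRSGroup, MoveServers, GetRSGroupInfo) correspond to calls against the rsgroup admin client from the hbase-rsgroup module. The following is a minimal sketch of such calls, assuming the branch-2.x RSGroupAdminClient API and reusing the group name and server addresses seen in the log; the connection wiring and class name are illustrative, not the test's own code.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Branch-2.x rsgroup admin facade; the test's VerifyingRSGroupAdminClient wraps this class.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Produces the "add rsgroup Group_testDisabledTableMove_1437575117" entry in the log.
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1437575117");

      // Produces the "move servers [...:32993, ...:37181] to rsgroup ..." entry in the log.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase20.apache.org", 32993));
      servers.add(Address.fromParts("jenkins-hbase20.apache.org", 37181));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1437575117");

      // Produces the GetRSGroupInfo requests that read the group definition back.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1437575117");
      System.out.println("Servers now in group: " + info.getServers());
    }
  }
}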
2023-07-13 03:16:39,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:39,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:39,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-13 03:16:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to default 2023-07-13 03:16:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:39,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:39,583 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:39,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, 
group=Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,585 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:39,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:39,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=138, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:39,589 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:39,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 138 2023-07-13 03:16:39,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-13 03:16:39,591 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:39,591 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:39,592 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:39,592 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:39,594 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:39,598 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:39,598 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:39,598 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,598 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,598 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): 
ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 empty. 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa empty. 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 empty. 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 empty. 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab empty. 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,599 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:39,599 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 03:16:39,610 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:39,611 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 996f457c67db58aa19e9f471dd814787, NAME => 'Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.', STARTKEY => '', ENDKEY 
=> 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:39,611 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 342abb796ef4c6abe08761532a9a9e44, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:39,611 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 5877f4ba7f9253faa81517fb25ca27ab, NAME => 'Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 996f457c67db58aa19e9f471dd814787, disabling compactions & flushes 2023-07-13 03:16:39,621 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. after waiting 0 ms 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 
2023-07-13 03:16:39,621 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:39,621 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 996f457c67db58aa19e9f471dd814787: 2023-07-13 03:16:39,622 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => cddcec05390828f144388bde2e4e27e4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:39,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,627 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 342abb796ef4c6abe08761532a9a9e44, disabling compactions & flushes 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 5877f4ba7f9253faa81517fb25ca27ab, disabling compactions & flushes 2023-07-13 03:16:39,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:39,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 
after waiting 0 ms 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. after waiting 0 ms 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:39,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:39,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 5877f4ba7f9253faa81517fb25ca27ab: 2023-07-13 03:16:39,628 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 342abb796ef4c6abe08761532a9a9e44: 2023-07-13 03:16:39,628 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => d97c33df5f6d021e6069ab84ad1303aa, NAME => 'Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing cddcec05390828f144388bde2e4e27e4, disabling compactions & flushes 2023-07-13 03:16:39,655 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 
after waiting 0 ms 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:39,655 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:39,655 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for cddcec05390828f144388bde2e4e27e4: 2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing d97c33df5f6d021e6069ab84ad1303aa, disabling compactions & flushes 2023-07-13 03:16:39,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. after waiting 0 ms 2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:39,659 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 
2023-07-13 03:16:39,659 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for d97c33df5f6d021e6069ab84ad1303aa: 2023-07-13 03:16:39,661 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:39,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218199662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218199662"}]},"ts":"1689218199662"} 2023-07-13 03:16:39,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218199662"}]},"ts":"1689218199662"} 2023-07-13 03:16:39,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218199662"}]},"ts":"1689218199662"} 2023-07-13 03:16:39,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218199662"}]},"ts":"1689218199662"} 2023-07-13 03:16:39,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218199662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218199662"}]},"ts":"1689218199662"} 2023-07-13 03:16:39,664 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
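The CreateTableProcedure above lays down five regions for Group_testDisabledTableMove, one per key range bounded by the split points 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', and records them in hbase:meta. A minimal sketch of creating such a pre-split table through the public Admin API follows; the split keys mirror the region boundaries in the log, while the class name, connection setup, and descriptor details are illustrative assumptions rather than the test's own helper code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single column family 'f', matching the descriptor printed in the create-table log entry.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Four split keys yield five regions: (-inf,'aaaaa'), ['aaaaa','i\xBF\x14i\xBE'), and so on.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("aaaaa"),
          new byte[] { (byte) 'i', (byte) 0xBF, 0x14, (byte) 'i', (byte) 0xBE },
          new byte[] { (byte) 'r', 0x1C, (byte) 0xC7, (byte) 'r', 0x1B },
          Bytes.toBytes("zzzzz")
      };
      // Blocks until the CreateTableProcedure completes, which the log shows the client
      // polling for via "Checking to see if procedure is done pid=138".
      admin.createTable(desc, splitKeys);
    }
  }
}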
2023-07-13 03:16:39,665 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:39,665 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218199665"}]},"ts":"1689218199665"} 2023-07-13 03:16:39,666 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-13 03:16:39,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:39,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:39,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:39,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:39,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, ASSIGN}, {pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, ASSIGN}, {pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, ASSIGN}, {pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, ASSIGN}, {pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, ASSIGN}] 2023-07-13 03:16:39,671 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, ASSIGN 2023-07-13 03:16:39,671 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, ASSIGN 2023-07-13 03:16:39,671 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, ASSIGN 2023-07-13 03:16:39,671 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, ASSIGN 2023-07-13 03:16:39,672 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, ASSIGN 2023-07-13 03:16:39,672 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=141, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:39,672 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:39,672 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=143, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:39,672 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=142, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44325,1689218176275; forceNewPlan=false, retain=false 2023-07-13 03:16:39,673 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=138, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,44171,1689218172445; forceNewPlan=false, retain=false 2023-07-13 03:16:39,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-13 03:16:39,822 INFO [jenkins-hbase20:33491] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-13 03:16:39,827 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=342abb796ef4c6abe08761532a9a9e44, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:39,827 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=996f457c67db58aa19e9f471dd814787, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:39,827 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=5877f4ba7f9253faa81517fb25ca27ab, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:39,827 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218199827"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218199827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218199827"}]},"ts":"1689218199827"} 2023-07-13 03:16:39,827 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199827"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218199827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218199827"}]},"ts":"1689218199827"} 2023-07-13 03:16:39,827 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=d97c33df5f6d021e6069ab84ad1303aa, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:39,827 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=cddcec05390828f144388bde2e4e27e4, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:39,827 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218199827"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218199827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218199827"}]},"ts":"1689218199827"} 2023-07-13 03:16:39,827 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199827"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218199827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218199827"}]},"ts":"1689218199827"} 2023-07-13 03:16:39,827 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218199827"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218199827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218199827"}]},"ts":"1689218199827"} 2023-07-13 03:16:39,828 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=139, state=RUNNABLE; OpenRegionProcedure 996f457c67db58aa19e9f471dd814787, 
server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:39,829 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; OpenRegionProcedure 5877f4ba7f9253faa81517fb25ca27ab, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:39,830 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=143, state=RUNNABLE; OpenRegionProcedure d97c33df5f6d021e6069ab84ad1303aa, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:39,834 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; OpenRegionProcedure 342abb796ef4c6abe08761532a9a9e44, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:39,834 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=142, state=RUNNABLE; OpenRegionProcedure cddcec05390828f144388bde2e4e27e4, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:39,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-13 03:16:39,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:39,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cddcec05390828f144388bde2e4e27e4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-13 03:16:39,984 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 
2023-07-13 03:16:39,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 342abb796ef4c6abe08761532a9a9e44, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-13 03:16:39,984 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,985 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,988 INFO [StoreOpener-cddcec05390828f144388bde2e4e27e4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,988 INFO [StoreOpener-342abb796ef4c6abe08761532a9a9e44-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,989 DEBUG [StoreOpener-cddcec05390828f144388bde2e4e27e4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/f 2023-07-13 03:16:39,990 DEBUG [StoreOpener-342abb796ef4c6abe08761532a9a9e44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/f 2023-07-13 03:16:39,990 DEBUG [StoreOpener-cddcec05390828f144388bde2e4e27e4-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/f 2023-07-13 03:16:39,990 DEBUG [StoreOpener-342abb796ef4c6abe08761532a9a9e44-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/f 2023-07-13 03:16:39,990 INFO [StoreOpener-cddcec05390828f144388bde2e4e27e4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cddcec05390828f144388bde2e4e27e4 columnFamilyName f 2023-07-13 03:16:39,990 INFO [StoreOpener-342abb796ef4c6abe08761532a9a9e44-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 342abb796ef4c6abe08761532a9a9e44 columnFamilyName f 2023-07-13 03:16:39,991 INFO [StoreOpener-342abb796ef4c6abe08761532a9a9e44-1] regionserver.HStore(310): Store=342abb796ef4c6abe08761532a9a9e44/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:39,991 INFO [StoreOpener-cddcec05390828f144388bde2e4e27e4-1] regionserver.HStore(310): Store=cddcec05390828f144388bde2e4e27e4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:39,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:39,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:39,999 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:40,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened cddcec05390828f144388bde2e4e27e4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11347204640, jitterRate=0.0567907840013504}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:40,000 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 342abb796ef4c6abe08761532a9a9e44; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9548862080, jitterRate=-0.11069291830062866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:40,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for cddcec05390828f144388bde2e4e27e4: 2023-07-13 03:16:40,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 342abb796ef4c6abe08761532a9a9e44: 2023-07-13 03:16:40,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4., pid=148, masterSystemTime=1689218199980 2023-07-13 03:16:40,004 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=cddcec05390828f144388bde2e4e27e4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:40,004 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200004"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218200004"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218200004"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218200004"}]},"ts":"1689218200004"} 2023-07-13 03:16:40,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44., pid=147, masterSystemTime=1689218199980 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,005 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5877f4ba7f9253faa81517fb25ca27ab, NAME => 'Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:40,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:40,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 
2023-07-13 03:16:40,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d97c33df5f6d021e6069ab84ad1303aa, NAME => 'Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-13 03:16:40,006 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=342abb796ef4c6abe08761532a9a9e44, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:40,006 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200006"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218200006"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218200006"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218200006"}]},"ts":"1689218200006"} 2023-07-13 03:16:40,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,008 INFO [StoreOpener-5877f4ba7f9253faa81517fb25ca27ab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,010 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=142 2023-07-13 03:16:40,010 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=142, state=SUCCESS; OpenRegionProcedure cddcec05390828f144388bde2e4e27e4, server=jenkins-hbase20.apache.org,44325,1689218176275 in 173 msec 2023-07-13 03:16:40,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-13 03:16:40,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; OpenRegionProcedure 342abb796ef4c6abe08761532a9a9e44, server=jenkins-hbase20.apache.org,44171,1689218172445 in 179 msec 2023-07-13 03:16:40,012 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, ASSIGN in 341 msec 2023-07-13 03:16:40,012 DEBUG [StoreOpener-5877f4ba7f9253faa81517fb25ca27ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/f 2023-07-13 03:16:40,012 DEBUG [StoreOpener-5877f4ba7f9253faa81517fb25ca27ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/f 2023-07-13 03:16:40,012 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, ASSIGN in 342 msec 2023-07-13 03:16:40,012 INFO [StoreOpener-5877f4ba7f9253faa81517fb25ca27ab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5877f4ba7f9253faa81517fb25ca27ab columnFamilyName f 2023-07-13 03:16:40,013 INFO [StoreOpener-5877f4ba7f9253faa81517fb25ca27ab-1] regionserver.HStore(310): Store=5877f4ba7f9253faa81517fb25ca27ab/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:40,023 INFO [StoreOpener-d97c33df5f6d021e6069ab84ad1303aa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,025 DEBUG [StoreOpener-d97c33df5f6d021e6069ab84ad1303aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/f 2023-07-13 03:16:40,025 DEBUG [StoreOpener-d97c33df5f6d021e6069ab84ad1303aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/f 2023-07-13 03:16:40,026 INFO [StoreOpener-d97c33df5f6d021e6069ab84ad1303aa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d97c33df5f6d021e6069ab84ad1303aa columnFamilyName f 2023-07-13 03:16:40,027 INFO [StoreOpener-d97c33df5f6d021e6069ab84ad1303aa-1] regionserver.HStore(310): Store=d97c33df5f6d021e6069ab84ad1303aa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:40,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:40,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened d97c33df5f6d021e6069ab84ad1303aa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9487572320, jitterRate=-0.11640097200870514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:40,038 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for d97c33df5f6d021e6069ab84ad1303aa: 2023-07-13 03:16:40,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa., pid=146, masterSystemTime=1689218199980 2023-07-13 03:16:40,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:40,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:40,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:40,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:40,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 996f457c67db58aa19e9f471dd814787, NAME => 'Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-13 03:16:40,041 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=d97c33df5f6d021e6069ab84ad1303aa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,041 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200041"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218200041"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218200041"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218200041"}]},"ts":"1689218200041"} 2023-07-13 03:16:40,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:40,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 5877f4ba7f9253faa81517fb25ca27ab; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11230649920, jitterRate=0.04593577980995178}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:40,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 5877f4ba7f9253faa81517fb25ca27ab: 2023-07-13 03:16:40,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab., pid=145, masterSystemTime=1689218199980 2023-07-13 03:16:40,045 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 
2023-07-13 03:16:40,045 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:40,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=143 2023-07-13 03:16:40,045 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; OpenRegionProcedure d97c33df5f6d021e6069ab84ad1303aa, server=jenkins-hbase20.apache.org,44171,1689218172445 in 212 msec 2023-07-13 03:16:40,045 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=5877f4ba7f9253faa81517fb25ca27ab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:40,045 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200045"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218200045"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218200045"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218200045"}]},"ts":"1689218200045"} 2023-07-13 03:16:40,046 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, ASSIGN in 376 msec 2023-07-13 03:16:40,047 INFO [StoreOpener-996f457c67db58aa19e9f471dd814787-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,048 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-13 03:16:40,048 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; OpenRegionProcedure 5877f4ba7f9253faa81517fb25ca27ab, server=jenkins-hbase20.apache.org,44325,1689218176275 in 217 msec 2023-07-13 03:16:40,049 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, ASSIGN in 379 msec 2023-07-13 03:16:40,049 DEBUG [StoreOpener-996f457c67db58aa19e9f471dd814787-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/f 2023-07-13 03:16:40,049 DEBUG [StoreOpener-996f457c67db58aa19e9f471dd814787-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/f 2023-07-13 03:16:40,049 INFO [StoreOpener-996f457c67db58aa19e9f471dd814787-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 996f457c67db58aa19e9f471dd814787 columnFamilyName f 2023-07-13 03:16:40,050 INFO [StoreOpener-996f457c67db58aa19e9f471dd814787-1] regionserver.HStore(310): Store=996f457c67db58aa19e9f471dd814787/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:40,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:40,057 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 996f457c67db58aa19e9f471dd814787; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9862772160, jitterRate=-0.08145776391029358}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:40,057 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 996f457c67db58aa19e9f471dd814787: 2023-07-13 03:16:40,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787., pid=144, masterSystemTime=1689218199980 2023-07-13 03:16:40,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:40,059 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 
2023-07-13 03:16:40,060 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=996f457c67db58aa19e9f471dd814787, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,060 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200060"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218200060"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218200060"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218200060"}]},"ts":"1689218200060"} 2023-07-13 03:16:40,063 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=139 2023-07-13 03:16:40,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=139, state=SUCCESS; OpenRegionProcedure 996f457c67db58aa19e9f471dd814787, server=jenkins-hbase20.apache.org,44171,1689218172445 in 233 msec 2023-07-13 03:16:40,065 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=138 2023-07-13 03:16:40,065 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=138, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, ASSIGN in 394 msec 2023-07-13 03:16:40,065 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:40,066 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218200065"}]},"ts":"1689218200065"} 2023-07-13 03:16:40,067 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-13 03:16:40,070 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=138, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:40,071 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 483 msec 2023-07-13 03:16:40,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=138 2023-07-13 03:16:40,194 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 138 completed 2023-07-13 03:16:40,194 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-13 03:16:40,194 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:40,199 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
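The entries above trace CreateTableProcedure pid=138 to completion and the test harness waiting until every region of Group_testDisabledTableMove is assigned. As an illustrative sketch only (not taken from this run), the same create-and-wait pattern with the stock HBase 2.x client API could look roughly like the following; the split keys, configuration, and class name are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      // Single column family 'f', matching the store-open entries above.
      TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
      // Hypothetical split keys giving five regions; the real test computes its own.
      byte[][] splits = { Bytes.toBytes("aaaaa"), Bytes.toBytes("iiiii"),
          Bytes.toBytes("rrrrr"), Bytes.toBytes("zzzzz") };
      // createTable blocks until the CreateTableProcedure finishes, i.e. until the
      // per-region ASSIGN/OpenRegionProcedure steps seen in the log have opened
      // every region.
      admin.createTable(builder.build(), splits);
    }
  }
}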
2023-07-13 03:16:40,200 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:40,200 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-13 03:16:40,201 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:40,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 03:16:40,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:40,210 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 03:16:40,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-13 03:16:40,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=149, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-13 03:16:40,215 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218200215"}]},"ts":"1689218200215"} 2023-07-13 03:16:40,216 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-13 03:16:40,217 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-13 03:16:40,217 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, UNASSIGN}, {pid=151, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, UNASSIGN}, {pid=152, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, UNASSIGN}, {pid=153, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, UNASSIGN}, {pid=154, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, UNASSIGN}] 2023-07-13 03:16:40,219 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, UNASSIGN 2023-07-13 03:16:40,220 INFO [PEWorker-5] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=152, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, UNASSIGN 2023-07-13 03:16:40,220 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, UNASSIGN 2023-07-13 03:16:40,220 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=154, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, UNASSIGN 2023-07-13 03:16:40,220 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=153, ppid=149, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, UNASSIGN 2023-07-13 03:16:40,220 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=996f457c67db58aa19e9f471dd814787, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,220 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200220"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218200220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218200220"}]},"ts":"1689218200220"} 2023-07-13 03:16:40,221 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=150, state=RUNNABLE; CloseRegionProcedure 996f457c67db58aa19e9f471dd814787, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:40,223 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=152 updating hbase:meta row=342abb796ef4c6abe08761532a9a9e44, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,223 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=5877f4ba7f9253faa81517fb25ca27ab, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:40,223 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200223"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218200223"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218200223"}]},"ts":"1689218200223"} 2023-07-13 03:16:40,223 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200223"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218200223"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218200223"}]},"ts":"1689218200223"} 2023-07-13 03:16:40,223 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=154 updating hbase:meta row=d97c33df5f6d021e6069ab84ad1303aa, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 
03:16:40,223 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200223"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218200223"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218200223"}]},"ts":"1689218200223"} 2023-07-13 03:16:40,224 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=153 updating hbase:meta row=cddcec05390828f144388bde2e4e27e4, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:40,224 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200224"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218200224"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218200224"}]},"ts":"1689218200224"} 2023-07-13 03:16:40,226 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=152, state=RUNNABLE; CloseRegionProcedure 342abb796ef4c6abe08761532a9a9e44, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:40,227 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=157, ppid=151, state=RUNNABLE; CloseRegionProcedure 5877f4ba7f9253faa81517fb25ca27ab, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:40,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=159, ppid=153, state=RUNNABLE; CloseRegionProcedure cddcec05390828f144388bde2e4e27e4, server=jenkins-hbase20.apache.org,44325,1689218176275}] 2023-07-13 03:16:40,227 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=158, ppid=154, state=RUNNABLE; CloseRegionProcedure d97c33df5f6d021e6069ab84ad1303aa, server=jenkins-hbase20.apache.org,44171,1689218172445}] 2023-07-13 03:16:40,315 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-13 03:16:40,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 996f457c67db58aa19e9f471dd814787, disabling compactions & flushes 2023-07-13 03:16:40,377 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:40,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:40,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. after waiting 0 ms 2023-07-13 03:16:40,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 
2023-07-13 03:16:40,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:40,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing cddcec05390828f144388bde2e4e27e4, disabling compactions & flushes 2023-07-13 03:16:40,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. after waiting 0 ms 2023-07-13 03:16:40,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:40,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787. 2023-07-13 03:16:40,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 996f457c67db58aa19e9f471dd814787: 2023-07-13 03:16:40,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing d97c33df5f6d021e6069ab84ad1303aa, disabling compactions & flushes 2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:40,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:40,385 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=996f457c67db58aa19e9f471dd814787, regionState=CLOSED 2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 
2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. after waiting 0 ms 2023-07-13 03:16:40,385 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200385"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218200385"}]},"ts":"1689218200385"} 2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:40,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4. 2023-07-13 03:16:40,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for cddcec05390828f144388bde2e4e27e4: 2023-07-13 03:16:40,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:40,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 5877f4ba7f9253faa81517fb25ca27ab, disabling compactions & flushes 2023-07-13 03:16:40,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:40,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:40,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. after waiting 0 ms 2023-07-13 03:16:40,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 
2023-07-13 03:16:40,391 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=153 updating hbase:meta row=cddcec05390828f144388bde2e4e27e4, regionState=CLOSED 2023-07-13 03:16:40,391 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200391"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218200391"}]},"ts":"1689218200391"} 2023-07-13 03:16:40,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:40,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa. 2023-07-13 03:16:40,392 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for d97c33df5f6d021e6069ab84ad1303aa: 2023-07-13 03:16:40,392 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=150 2023-07-13 03:16:40,392 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=150, state=SUCCESS; CloseRegionProcedure 996f457c67db58aa19e9f471dd814787, server=jenkins-hbase20.apache.org,44171,1689218172445 in 165 msec 2023-07-13 03:16:40,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:40,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:40,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 342abb796ef4c6abe08761532a9a9e44, disabling compactions & flushes 2023-07-13 03:16:40,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:40,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 
2023-07-13 03:16:40,395 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=154 updating hbase:meta row=d97c33df5f6d021e6069ab84ad1303aa, regionState=CLOSED 2023-07-13 03:16:40,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=996f457c67db58aa19e9f471dd814787, UNASSIGN in 176 msec 2023-07-13 03:16:40,396 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689218200395"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218200395"}]},"ts":"1689218200395"} 2023-07-13 03:16:40,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab. 2023-07-13 03:16:40,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 5877f4ba7f9253faa81517fb25ca27ab: 2023-07-13 03:16:40,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. after waiting 0 ms 2023-07-13 03:16:40,396 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=159, resume processing ppid=153 2023-07-13 03:16:40,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:40,396 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=159, ppid=153, state=SUCCESS; CloseRegionProcedure cddcec05390828f144388bde2e4e27e4, server=jenkins-hbase20.apache.org,44325,1689218176275 in 167 msec 2023-07-13 03:16:40,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,398 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cddcec05390828f144388bde2e4e27e4, UNASSIGN in 179 msec 2023-07-13 03:16:40,398 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=5877f4ba7f9253faa81517fb25ca27ab, regionState=CLOSED 2023-07-13 03:16:40,398 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200398"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218200398"}]},"ts":"1689218200398"} 2023-07-13 03:16:40,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=158, resume processing ppid=154 2023-07-13 03:16:40,399 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=158, ppid=154, state=SUCCESS; CloseRegionProcedure d97c33df5f6d021e6069ab84ad1303aa, server=jenkins-hbase20.apache.org,44171,1689218172445 in 171 msec 2023-07-13 03:16:40,399 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=d97c33df5f6d021e6069ab84ad1303aa, UNASSIGN in 182 
msec 2023-07-13 03:16:40,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:40,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44. 2023-07-13 03:16:40,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 342abb796ef4c6abe08761532a9a9e44: 2023-07-13 03:16:40,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=157, resume processing ppid=151 2023-07-13 03:16:40,401 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=157, ppid=151, state=SUCCESS; CloseRegionProcedure 5877f4ba7f9253faa81517fb25ca27ab, server=jenkins-hbase20.apache.org,44325,1689218176275 in 174 msec 2023-07-13 03:16:40,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5877f4ba7f9253faa81517fb25ca27ab, UNASSIGN in 184 msec 2023-07-13 03:16:40,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:40,402 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=152 updating hbase:meta row=342abb796ef4c6abe08761532a9a9e44, regionState=CLOSED 2023-07-13 03:16:40,402 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689218200402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218200402"}]},"ts":"1689218200402"} 2023-07-13 03:16:40,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=152 2023-07-13 03:16:40,407 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=152, state=SUCCESS; CloseRegionProcedure 342abb796ef4c6abe08761532a9a9e44, server=jenkins-hbase20.apache.org,44171,1689218172445 in 177 msec 2023-07-13 03:16:40,409 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=149 2023-07-13 03:16:40,409 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=149, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=342abb796ef4c6abe08761532a9a9e44, UNASSIGN in 190 msec 2023-07-13 03:16:40,410 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218200409"}]},"ts":"1689218200409"} 2023-07-13 03:16:40,411 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-13 03:16:40,412 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-13 03:16:40,414 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 202 msec 2023-07-13 03:16:40,516 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=149 2023-07-13 03:16:40,516 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 149 completed 2023-07-13 03:16:40,516 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:40,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-13 03:16:40,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1437575117, current retry=0 2023-07-13 03:16:40,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1437575117. 
2023-07-13 03:16:40,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:40,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-13 03:16:40,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:40,530 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-13 03:16:40,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable Group_testDisabledTableMove 2023-07-13 03:16:40,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:40,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 924 service: MasterService methodName: DisableTable size: 88 connection: 148.251.75.209:45566 deadline: 1689218260531, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-13 03:16:40,531 DEBUG [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-13 03:16:40,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete Group_testDisabledTableMove 2023-07-13 03:16:40,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] procedure2.ProcedureExecutor(1029): Stored pid=161, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,534 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=161, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1437575117' 2023-07-13 03:16:40,535 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=161, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:40,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=161 2023-07-13 03:16:40,540 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-13 03:16:40,541 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,541 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,541 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:40,541 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:40,541 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,543 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/recovered.edits] 2023-07-13 03:16:40,544 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/recovered.edits] 2023-07-13 03:16:40,544 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/recovered.edits] 2023-07-13 03:16:40,544 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/recovered.edits] 2023-07-13 03:16:40,544 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/f, FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/recovered.edits] 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab/recovered.edits/4.seqid 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787/recovered.edits/4.seqid 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44/recovered.edits/4.seqid 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4/recovered.edits/4.seqid 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/recovered.edits/4.seqid to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/archive/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa/recovered.edits/4.seqid 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/5877f4ba7f9253faa81517fb25ca27ab 2023-07-13 03:16:40,551 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/996f457c67db58aa19e9f471dd814787 2023-07-13 03:16:40,552 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/342abb796ef4c6abe08761532a9a9e44 2023-07-13 03:16:40,552 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/cddcec05390828f144388bde2e4e27e4 2023-07-13 03:16:40,552 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/.tmp/data/default/Group_testDisabledTableMove/d97c33df5f6d021e6069ab84ad1303aa 2023-07-13 03:16:40,552 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-13 03:16:40,554 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=161, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,556 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-13 03:16:40,561 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-13 03:16:40,562 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=161, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,562 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-13 03:16:40,562 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218200562"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,563 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218200562"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,563 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218200562"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,563 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218200562"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,563 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218200562"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,565 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-13 03:16:40,565 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 996f457c67db58aa19e9f471dd814787, NAME => 'Group_testDisabledTableMove,,1689218199586.996f457c67db58aa19e9f471dd814787.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 5877f4ba7f9253faa81517fb25ca27ab, NAME => 'Group_testDisabledTableMove,aaaaa,1689218199586.5877f4ba7f9253faa81517fb25ca27ab.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 342abb796ef4c6abe08761532a9a9e44, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689218199586.342abb796ef4c6abe08761532a9a9e44.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cddcec05390828f144388bde2e4e27e4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689218199586.cddcec05390828f144388bde2e4e27e4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => d97c33df5f6d021e6069ab84ad1303aa, NAME => 'Group_testDisabledTableMove,zzzzz,1689218199586.d97c33df5f6d021e6069ab84ad1303aa.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-13 03:16:40,565 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-13 03:16:40,565 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218200565"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:40,566 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-13 03:16:40,568 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=161, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-13 03:16:40,569 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=161, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 36 msec 2023-07-13 03:16:40,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(1230): Checking to see if procedure is done pid=161 2023-07-13 03:16:40,641 INFO [Listener at localhost.localdomain/36261] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 161 completed 2023-07-13 03:16:40,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:40,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:40,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:40,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181] to rsgroup default 2023-07-13 03:16:40,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:40,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1437575117, current retry=0 2023-07-13 03:16:40,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase20.apache.org,32993,1689218172776, jenkins-hbase20.apache.org,37181,1689218172183] are moved back to Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1437575117 => default 2023-07-13 03:16:40,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:40,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_testDisabledTableMove_1437575117 2023-07-13 03:16:40,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:40,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:40,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:40,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:40,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:40,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:40,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:40,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:40,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:40,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:40,674 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:40,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:40,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:40,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:40,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:40,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:40,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 958 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219400690, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:40,691 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:40,692 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:40,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,694 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:40,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:40,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:40,716 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=523 (was 521) Potentially hanging thread: hconnection-0x2b42746c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1725869577_17 at /127.0.0.1:55274 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2cd1b0c2-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1866409043_17 at /127.0.0.1:59208 [Waiting for operation #15] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=831 (was 801) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 490), ProcessCount=170 (was 170), AvailableMemoryMB=3485 (was 3565) 2023-07-13 03:16:40,716 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-13 03:16:40,733 INFO [Listener at localhost.localdomain/36261] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=523, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=170, AvailableMemoryMB=3485 2023-07-13 03:16:40,733 WARN [Listener at localhost.localdomain/36261] hbase.ResourceChecker(130): Thread=523 is superior to 500 2023-07-13 03:16:40,735 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-13 03:16:40,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:40,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:40,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:40,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:40,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:40,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:40,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:40,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:40,754 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:40,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:40,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:40,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:40,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:40,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:40,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:33491] to rsgroup master 2023-07-13 03:16:40,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:40,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] ipc.CallRunner(144): callId: 986 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:45566 deadline: 1689219400765, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 2023-07-13 03:16:40,765 WARN [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:33491 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ...
1 more 2023-07-13 03:16:40,767 INFO [Listener at localhost.localdomain/36261] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:40,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:40,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:40,768 INFO [Listener at localhost.localdomain/36261] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:32993, jenkins-hbase20.apache.org:37181, jenkins-hbase20.apache.org:44171, jenkins-hbase20.apache.org:44325], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:40,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:40,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33491] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:40,769 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 03:16:40,769 INFO [Listener at localhost.localdomain/36261] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 03:16:40,769 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x62fd740f to 127.0.0.1:56998 2023-07-13 03:16:40,769 DEBUG [Listener at localhost.localdomain/36261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,770 DEBUG [Listener at localhost.localdomain/36261] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 03:16:40,771 DEBUG [Listener at localhost.localdomain/36261] util.JVMClusterUtil(257): Found active master hash=82629146, stopped=false 2023-07-13 03:16:40,771 DEBUG [Listener at localhost.localdomain/36261] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 03:16:40,771 DEBUG [Listener at localhost.localdomain/36261] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 03:16:40,771 INFO [Listener at localhost.localdomain/36261] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:40,772 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:40,772 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:40,772 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): 
regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:40,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:40,773 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:40,773 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:40,773 INFO [Listener at localhost.localdomain/36261] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 03:16:40,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:40,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:40,773 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:40,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:40,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:40,774 DEBUG [Listener at localhost.localdomain/36261] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1d79db27 to 127.0.0.1:56998 2023-07-13 03:16:40,774 DEBUG [Listener at localhost.localdomain/36261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,774 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,37181,1689218172183' ***** 2023-07-13 03:16:40,774 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:40,775 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:40,776 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44171,1689218172445' ***** 2023-07-13 03:16:40,776 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:40,777 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,32993,1689218172776' ***** 2023-07-13 03:16:40,779 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
2023-07-13 03:16:40,777 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:40,783 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:40,783 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,44325,1689218176275' ***** 2023-07-13 03:16:40,786 INFO [Listener at localhost.localdomain/36261] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:40,787 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:40,805 INFO [RS:3;jenkins-hbase20:44325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@26dddac4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:40,805 INFO [RS:2;jenkins-hbase20:32993] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@187365af{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:40,805 INFO [RS:0;jenkins-hbase20:37181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4bb093d2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:40,805 INFO [RS:1;jenkins-hbase20:44171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7a6083cf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:40,809 INFO [RS:2;jenkins-hbase20:32993] server.AbstractConnector(383): Stopped ServerConnector@6a985c61{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:40,809 INFO [RS:1;jenkins-hbase20:44171] server.AbstractConnector(383): Stopped ServerConnector@15e5beaf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:40,809 INFO [RS:3;jenkins-hbase20:44325] server.AbstractConnector(383): Stopped ServerConnector@171b7e62{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:40,809 INFO [RS:0;jenkins-hbase20:37181] server.AbstractConnector(383): Stopped ServerConnector@1f3aef2c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:40,809 INFO [RS:3;jenkins-hbase20:44325] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:40,809 INFO [RS:1;jenkins-hbase20:44171] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:40,809 INFO [RS:2;jenkins-hbase20:32993] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:40,809 INFO [RS:0;jenkins-hbase20:37181] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:40,813 INFO [RS:1;jenkins-hbase20:44171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4c7d57dc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:40,812 INFO [RS:3;jenkins-hbase20:44325] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6149e6b5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:40,814 INFO [RS:1;jenkins-hbase20:44171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@45c0801a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:40,814 INFO [RS:0;jenkins-hbase20:37181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f7c5c45{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:40,813 INFO [RS:2;jenkins-hbase20:32993] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1c116fa5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:40,816 INFO [RS:0;jenkins-hbase20:37181] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2501c389{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:40,816 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:40,815 INFO [RS:3;jenkins-hbase20:44325] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3120b24b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:40,816 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:40,816 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,816 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:40,817 INFO [RS:2;jenkins-hbase20:32993] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@47636af2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:40,819 INFO [RS:2;jenkins-hbase20:32993] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:40,819 INFO [RS:3;jenkins-hbase20:44325] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:40,819 INFO [RS:1;jenkins-hbase20:44171] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:40,819 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:40,819 INFO [RS:2;jenkins-hbase20:32993] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:40,820 INFO [RS:1;jenkins-hbase20:44171] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:40,820 INFO [RS:2;jenkins-hbase20:32993] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-13 03:16:40,820 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:40,820 DEBUG [RS:2;jenkins-hbase20:32993] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50f77fb1 to 127.0.0.1:56998 2023-07-13 03:16:40,820 DEBUG [RS:2;jenkins-hbase20:32993] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,820 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,32993,1689218172776; all regions closed. 2023-07-13 03:16:40,820 INFO [RS:3;jenkins-hbase20:44325] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:40,821 INFO [RS:3;jenkins-hbase20:44325] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:40,821 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,821 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(3305): Received CLOSE for f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(3305): Received CLOSE for 7c4e74675a07c3fb9472d5b7eb467f88 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(3305): Received CLOSE for 184069995e46652ffc86537736197d8c 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:40,822 DEBUG [RS:3;jenkins-hbase20:44325] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x485d8eb9 to 127.0.0.1:56998 2023-07-13 03:16:40,822 DEBUG [RS:3;jenkins-hbase20:44325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:40,820 INFO [RS:0;jenkins-hbase20:37181] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:40,823 INFO [RS:0;jenkins-hbase20:37181] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:40,823 INFO [RS:0;jenkins-hbase20:37181] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:40,823 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:40,823 DEBUG [RS:0;jenkins-hbase20:37181] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x60d1f91a to 127.0.0.1:56998 2023-07-13 03:16:40,823 DEBUG [RS:0;jenkins-hbase20:37181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,823 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,37181,1689218172183; all regions closed. 2023-07-13 03:16:40,820 INFO [RS:1;jenkins-hbase20:44171] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:40,822 INFO [RS:3;jenkins-hbase20:44325] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 03:16:40,822 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing f819c5469435fdc78753bc4f41cd4d89, disabling compactions & flushes 2023-07-13 03:16:40,821 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,821 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:40,823 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 03:16:40,823 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(3305): Received CLOSE for 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:40,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:40,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. after waiting 0 ms 2023-07-13 03:16:40,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:40,824 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:40,824 DEBUG [RS:1;jenkins-hbase20:44171] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x775d90c9 to 127.0.0.1:56998 2023-07-13 03:16:40,824 DEBUG [RS:1;jenkins-hbase20:44171] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,824 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 03:16:40,824 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1478): Online Regions={98191189e1297e8d8e6d58f3c26a3bea=testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea.} 2023-07-13 03:16:40,825 DEBUG [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1504): Waiting on 98191189e1297e8d8e6d58f3c26a3bea 2023-07-13 03:16:40,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 98191189e1297e8d8e6d58f3c26a3bea, disabling compactions & flushes 2023-07-13 03:16:40,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:40,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:40,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. after waiting 0 ms 2023-07-13 03:16:40,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:40,832 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-13 03:16:40,832 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1478): Online Regions={f819c5469435fdc78753bc4f41cd4d89=hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89., 7c4e74675a07c3fb9472d5b7eb467f88=hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88., 184069995e46652ffc86537736197d8c=unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c., 1588230740=hbase:meta,,1.1588230740} 2023-07-13 03:16:40,833 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1504): Waiting on 1588230740, 184069995e46652ffc86537736197d8c, 7c4e74675a07c3fb9472d5b7eb467f88, f819c5469435fdc78753bc4f41cd4d89 2023-07-13 03:16:40,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:40,834 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:40,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:40,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:40,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:40,834 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.51 KB heapSize=61.24 KB 2023-07-13 03:16:40,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/namespace/f819c5469435fdc78753bc4f41cd4d89/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-13 03:16:40,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/testRename/98191189e1297e8d8e6d58f3c26a3bea/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 03:16:40,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:40,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for f819c5469435fdc78753bc4f41cd4d89: 2023-07-13 03:16:40,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689218175169.f819c5469435fdc78753bc4f41cd4d89. 2023-07-13 03:16:40,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 2023-07-13 03:16:40,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 98191189e1297e8d8e6d58f3c26a3bea: 2023-07-13 03:16:40,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689218193435.98191189e1297e8d8e6d58f3c26a3bea. 
2023-07-13 03:16:40,867 DEBUG [RS:2;jenkins-hbase20:32993] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:40,867 INFO [RS:2;jenkins-hbase20:32993] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C32993%2C1689218172776:(num 1689218174698) 2023-07-13 03:16:40,867 DEBUG [RS:2;jenkins-hbase20:32993] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,867 INFO [RS:2;jenkins-hbase20:32993] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 7c4e74675a07c3fb9472d5b7eb467f88, disabling compactions & flushes 2023-07-13 03:16:40,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:40,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:40,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. after waiting 0 ms 2023-07-13 03:16:40,869 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:40,870 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 7c4e74675a07c3fb9472d5b7eb467f88 1/1 column families, dataSize=22.37 KB heapSize=36.89 KB 2023-07-13 03:16:40,878 INFO [RS:2;jenkins-hbase20:32993] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:40,878 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:40,878 INFO [RS:2;jenkins-hbase20:32993] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:40,878 INFO [RS:2;jenkins-hbase20:32993] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:40,878 INFO [RS:2;jenkins-hbase20:32993] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 03:16:40,878 DEBUG [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:40,880 INFO [RS:2;jenkins-hbase20:32993] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:32993 2023-07-13 03:16:40,880 INFO [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C37181%2C1689218172183.meta:.meta(num 1689218174904) 2023-07-13 03:16:40,915 DEBUG [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:40,915 INFO [RS:0;jenkins-hbase20:37181] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C37181%2C1689218172183:(num 1689218174694) 2023-07-13 03:16:40,915 DEBUG [RS:0;jenkins-hbase20:37181] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:40,915 INFO [RS:0;jenkins-hbase20:37181] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:40,921 INFO [RS:0;jenkins-hbase20:37181] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:40,925 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:40,925 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:40,925 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,32993,1689218172776 2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:40,926 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:40,926 INFO [RS:0;jenkins-hbase20:37181] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:40,927 INFO [RS:0;jenkins-hbase20:37181] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:40,927 INFO [RS:0;jenkins-hbase20:37181] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:40,927 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:40,928 INFO [RS:0;jenkins-hbase20:37181] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:37181 2023-07-13 03:16:40,930 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,32993,1689218172776] 2023-07-13 03:16:40,930 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,32993,1689218172776; numProcessing=1 2023-07-13 03:16:40,932 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,32993,1689218172776 already deleted, retry=false 2023-07-13 03:16:40,932 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,32993,1689218172776 expired; onlineServers=3 2023-07-13 03:16:40,932 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:40,933 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:40,933 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,37181,1689218172183 2023-07-13 03:16:40,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.37 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/3c3b98865bf1420b8714a8a143f7c5c4 2023-07-13 03:16:40,933 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 
03:16:40,933 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,37181,1689218172183] 2023-07-13 03:16:40,933 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,37181,1689218172183; numProcessing=2 2023-07-13 03:16:40,934 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,37181,1689218172183 already deleted, retry=false 2023-07-13 03:16:40,934 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,37181,1689218172183 expired; onlineServers=2 2023-07-13 03:16:40,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3c3b98865bf1420b8714a8a143f7c5c4 2023-07-13 03:16:40,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/.tmp/m/3c3b98865bf1420b8714a8a143f7c5c4 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/3c3b98865bf1420b8714a8a143f7c5c4 2023-07-13 03:16:40,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3c3b98865bf1420b8714a8a143f7c5c4 2023-07-13 03:16:40,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/m/3c3b98865bf1420b8714a8a143f7c5c4, entries=22, sequenceid=107, filesize=5.9 K 2023-07-13 03:16:40,950 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.37 KB/22907, heapSize ~36.88 KB/37760, currentSize=0 B/0 for 7c4e74675a07c3fb9472d5b7eb467f88 in 80ms, sequenceid=107, compaction requested=true 2023-07-13 03:16:40,964 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/rsgroup/7c4e74675a07c3fb9472d5b7eb467f88/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-13 03:16:40,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:40,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 2023-07-13 03:16:40,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 7c4e74675a07c3fb9472d5b7eb467f88: 2023-07-13 03:16:40,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689218175378.7c4e74675a07c3fb9472d5b7eb467f88. 
2023-07-13 03:16:40,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 184069995e46652ffc86537736197d8c, disabling compactions & flushes 2023-07-13 03:16:40,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:40,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:40,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. after waiting 0 ms 2023-07-13 03:16:40,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:40,971 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/default/unmovedTable/184069995e46652ffc86537736197d8c/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-13 03:16:40,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:40,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 184069995e46652ffc86537736197d8c: 2023-07-13 03:16:40,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689218195622.184069995e46652ffc86537736197d8c. 2023-07-13 03:16:41,025 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,44171,1689218172445; all regions closed. 2023-07-13 03:16:41,031 DEBUG [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:41,031 INFO [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44171%2C1689218172445.meta:.meta(num 1689218177570) 2023-07-13 03:16:41,033 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-13 03:16:41,039 DEBUG [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:41,039 INFO [RS:1;jenkins-hbase20:44171] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44171%2C1689218172445:(num 1689218174698) 2023-07-13 03:16:41,039 DEBUG [RS:1;jenkins-hbase20:44171] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:41,039 INFO [RS:1;jenkins-hbase20:44171] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:41,040 INFO [RS:1;jenkins-hbase20:44171] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:41,040 INFO [RS:1;jenkins-hbase20:44171] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-13 03:16:41,040 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:41,040 INFO [RS:1;jenkins-hbase20:44171] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:41,040 INFO [RS:1;jenkins-hbase20:44171] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:41,041 INFO [RS:1;jenkins-hbase20:44171] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44171 2023-07-13 03:16:41,042 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:41,042 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:41,042 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44171,1689218172445 2023-07-13 03:16:41,042 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44171,1689218172445] 2023-07-13 03:16:41,042 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44171,1689218172445; numProcessing=3 2023-07-13 03:16:41,043 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44171,1689218172445 already deleted, retry=false 2023-07-13 03:16:41,043 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44171,1689218172445 expired; onlineServers=1 2023-07-13 03:16:41,233 DEBUG [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-13 03:16:41,274 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,274 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44171-0x1008454350d0002, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,274 INFO [RS:1;jenkins-hbase20:44171] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44171,1689218172445; zookeeper connection closed. 
2023-07-13 03:16:41,275 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@63096bec] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@63096bec 2023-07-13 03:16:41,314 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.59 KB at sequenceid=220 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/91c9ec45469c4988a1098cdfbe112743 2023-07-13 03:16:41,322 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91c9ec45469c4988a1098cdfbe112743 2023-07-13 03:16:41,333 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=220 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/rep_barrier/133b3f16ea25463091b8da14daab4ffd 2023-07-13 03:16:41,338 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 133b3f16ea25463091b8da14daab4ffd 2023-07-13 03:16:41,348 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=220 (bloomFilter=false), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/e7c241fd188c4414bc2f0b441a9fde25 2023-07-13 03:16:41,353 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e7c241fd188c4414bc2f0b441a9fde25 2023-07-13 03:16:41,354 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/info/91c9ec45469c4988a1098cdfbe112743 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/91c9ec45469c4988a1098cdfbe112743 2023-07-13 03:16:41,361 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 91c9ec45469c4988a1098cdfbe112743 2023-07-13 03:16:41,361 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/info/91c9ec45469c4988a1098cdfbe112743, entries=62, sequenceid=220, filesize=11.8 K 2023-07-13 03:16:41,363 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/rep_barrier/133b3f16ea25463091b8da14daab4ffd as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier/133b3f16ea25463091b8da14daab4ffd 2023-07-13 03:16:41,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
133b3f16ea25463091b8da14daab4ffd 2023-07-13 03:16:41,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/rep_barrier/133b3f16ea25463091b8da14daab4ffd, entries=8, sequenceid=220, filesize=5.8 K 2023-07-13 03:16:41,370 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/.tmp/table/e7c241fd188c4414bc2f0b441a9fde25 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/e7c241fd188c4414bc2f0b441a9fde25 2023-07-13 03:16:41,375 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,375 INFO [RS:0;jenkins-hbase20:37181] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,37181,1689218172183; zookeeper connection closed. 2023-07-13 03:16:41,375 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:37181-0x1008454350d0001, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,375 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@56e1f44f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@56e1f44f 2023-07-13 03:16:41,377 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e7c241fd188c4414bc2f0b441a9fde25 2023-07-13 03:16:41,377 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/table/e7c241fd188c4414bc2f0b441a9fde25, entries=16, sequenceid=220, filesize=6.0 K 2023-07-13 03:16:41,378 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.51 KB/38410, heapSize ~61.20 KB/62664, currentSize=0 B/0 for 1588230740 in 544ms, sequenceid=220, compaction requested=true 2023-07-13 03:16:41,393 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/data/hbase/meta/1588230740/recovered.edits/223.seqid, newMaxSeqId=223, maxSeqId=108 2023-07-13 03:16:41,393 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:41,394 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:41,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:41,394 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:41,433 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1170): stopping server 
jenkins-hbase20.apache.org,44325,1689218176275; all regions closed. 2023-07-13 03:16:41,441 DEBUG [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:41,442 INFO [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44325%2C1689218176275.meta:.meta(num 1689218184408) 2023-07-13 03:16:41,449 DEBUG [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/oldWALs 2023-07-13 03:16:41,449 INFO [RS:3;jenkins-hbase20:44325] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C44325%2C1689218176275:(num 1689218176690) 2023-07-13 03:16:41,449 DEBUG [RS:3;jenkins-hbase20:44325] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:41,449 INFO [RS:3;jenkins-hbase20:44325] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:41,449 INFO [RS:3;jenkins-hbase20:44325] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:41,449 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:41,450 INFO [RS:3;jenkins-hbase20:44325] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:44325 2023-07-13 03:16:41,455 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,44325,1689218176275 2023-07-13 03:16:41,455 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:41,459 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,44325,1689218176275] 2023-07-13 03:16:41,459 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,44325,1689218176275; numProcessing=4 2023-07-13 03:16:41,461 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,44325,1689218176275 already deleted, retry=false 2023-07-13 03:16:41,461 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,44325,1689218176275 expired; onlineServers=0 2023-07-13 03:16:41,461 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,33491,1689218169949' ***** 2023-07-13 03:16:41,461 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 03:16:41,462 DEBUG [M:0;jenkins-hbase20:33491] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45d1d821, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:41,462 INFO 
[M:0;jenkins-hbase20:33491] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:41,466 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:41,466 INFO [M:0;jenkins-hbase20:33491] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@51bc735f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:41,466 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:41,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:41,467 INFO [M:0;jenkins-hbase20:33491] server.AbstractConnector(383): Stopped ServerConnector@1d5a3f5b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:41,467 INFO [M:0;jenkins-hbase20:33491] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:41,468 INFO [M:0;jenkins-hbase20:33491] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76f839dd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:41,469 INFO [M:0;jenkins-hbase20:33491] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37734faf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:41,470 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,33491,1689218169949 2023-07-13 03:16:41,470 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,33491,1689218169949; all regions closed. 2023-07-13 03:16:41,470 DEBUG [M:0;jenkins-hbase20:33491] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:41,470 INFO [M:0;jenkins-hbase20:33491] master.HMaster(1491): Stopping master jetty server 2023-07-13 03:16:41,471 INFO [M:0;jenkins-hbase20:33491] server.AbstractConnector(383): Stopped ServerConnector@5e4f6e80{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:41,471 DEBUG [M:0;jenkins-hbase20:33491] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 03:16:41,472 DEBUG [M:0;jenkins-hbase20:33491] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 03:16:41,472 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218174293] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218174293,5,FailOnTimeoutGroup] 2023-07-13 03:16:41,472 INFO [M:0;jenkins-hbase20:33491] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 03:16:41,472 INFO [M:0;jenkins-hbase20:33491] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-13 03:16:41,472 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218174294] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218174294,5,FailOnTimeoutGroup] 2023-07-13 03:16:41,472 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 03:16:41,472 INFO [M:0;jenkins-hbase20:33491] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-13 03:16:41,472 DEBUG [M:0;jenkins-hbase20:33491] master.HMaster(1512): Stopping service threads 2023-07-13 03:16:41,472 INFO [M:0;jenkins-hbase20:33491] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 03:16:41,473 ERROR [M:0;jenkins-hbase20:33491] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-13 03:16:41,473 INFO [M:0;jenkins-hbase20:33491] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 03:16:41,473 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 03:16:41,474 DEBUG [M:0;jenkins-hbase20:33491] zookeeper.ZKUtil(398): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 03:16:41,474 WARN [M:0;jenkins-hbase20:33491] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 03:16:41,474 INFO [M:0;jenkins-hbase20:33491] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 03:16:41,474 INFO [M:0;jenkins-hbase20:33491] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 03:16:41,474 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:41,474 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:41,474 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:41,474 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:41,474 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 03:16:41,474 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=539.39 KB heapSize=645.84 KB 2023-07-13 03:16:41,475 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,475 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:32993-0x1008454350d0003, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,475 INFO [RS:2;jenkins-hbase20:32993] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,32993,1689218172776; zookeeper connection closed. 2023-07-13 03:16:41,475 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@69ba05b5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@69ba05b5 2023-07-13 03:16:41,495 INFO [M:0;jenkins-hbase20:33491] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=539.39 KB at sequenceid=1200 (bloomFilter=true), to=hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7ffcd1680c4e48938168ca9ad88fa496 2023-07-13 03:16:41,502 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/7ffcd1680c4e48938168ca9ad88fa496 as hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7ffcd1680c4e48938168ca9ad88fa496 2023-07-13 03:16:41,508 INFO [M:0;jenkins-hbase20:33491] regionserver.HStore(1080): Added hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/7ffcd1680c4e48938168ca9ad88fa496, entries=160, sequenceid=1200, filesize=28.1 K 2023-07-13 03:16:41,509 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegion(2948): Finished flush of dataSize ~539.39 KB/552334, heapSize ~645.83 KB/661328, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 35ms, sequenceid=1200, compaction requested=false 2023-07-13 03:16:41,511 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:41,511 DEBUG [M:0;jenkins-hbase20:33491] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:41,514 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:41,514 INFO [M:0;jenkins-hbase20:33491] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
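The entries above record the tail end of a full mini-cluster teardown: the last region server closes its WALs and leases, the master stops its info server and cleaner chores, and finally the master flushes and closes its local master:store procedure region before its own RPC server is stopped. The following is a minimal, illustrative sketch (not taken from this log) of the test-side call that drives such a sequence, assuming the standard HBaseTestingUtility API from the hbase-server test jar; the TEST_UTIL field name is a common convention, not something the log shows.

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterTeardownSketch {
  // Shared test utility; the name TEST_UTIL is conventional, not from the log.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void main(String[] args) throws Exception {
    TEST_UTIL.startMiniCluster();   // MiniZK, mini-DFS, one master, region servers
    try {
      // ... test body would run against TEST_UTIL.getConnection() here ...
    } finally {
      // Stops every region server and then the master. Before exiting, the
      // master flushes its local master:store region (the "Flushing
      // 1595e783b53d99cd5eef43b6debb2682" lines above), after which the
      // datanodes and the MiniZK cluster are shut down.
      TEST_UTIL.shutdownMiniCluster();
    }
  }
}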
2023-07-13 03:16:41,515 INFO [M:0;jenkins-hbase20:33491] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:33491 2023-07-13 03:16:41,517 DEBUG [M:0;jenkins-hbase20:33491] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,33491,1689218169949 already deleted, retry=false 2023-07-13 03:16:41,675 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,675 INFO [M:0;jenkins-hbase20:33491] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,33491,1689218169949; zookeeper connection closed. 2023-07-13 03:16:41,675 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): master:33491-0x1008454350d0000, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,775 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,775 INFO [RS:3;jenkins-hbase20:44325] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,44325,1689218176275; zookeeper connection closed. 2023-07-13 03:16:41,776 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): regionserver:44325-0x1008454350d000b, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:41,776 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@446b4966] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@446b4966 2023-07-13 03:16:41,776 INFO [Listener at localhost.localdomain/36261] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-13 03:16:41,776 WARN [Listener at localhost.localdomain/36261] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:41,779 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:41,882 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:41,883 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-28934839-148.251.75.209-1689218166310 (Datanode Uuid f14afc60-6791-474c-b55e-56b1db75c49b) service to localhost.localdomain/127.0.0.1:34135 2023-07-13 03:16:41,885 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data5/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:41,885 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data6/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:41,888 WARN [Listener at localhost.localdomain/36261] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:41,890 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:41,993 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:41,993 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-28934839-148.251.75.209-1689218166310 (Datanode Uuid 2fbab921-4a8f-41a9-bccf-ecb06e44ce10) service to localhost.localdomain/127.0.0.1:34135 2023-07-13 03:16:41,994 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data3/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:41,995 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data4/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:41,995 WARN [Listener at localhost.localdomain/36261] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:42,010 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:42,014 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:42,014 WARN [BP-28934839-148.251.75.209-1689218166310 heartbeating to localhost.localdomain/127.0.0.1:34135] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-28934839-148.251.75.209-1689218166310 (Datanode Uuid 4609a206-ce53-4981-8365-9d39736f9c95) service to localhost.localdomain/127.0.0.1:34135 2023-07-13 03:16:42,015 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data1/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:42,015 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/cluster_5365af11-0016-b950-934d-d6cdde7e87b7/dfs/data/data2/current/BP-28934839-148.251.75.209-1689218166310] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:42,042 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-13 03:16:42,163 INFO [Listener at localhost.localdomain/36261] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 03:16:42,226 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 03:16:42,226 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.log.dir so I do NOT create it in target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9a468761-9605-9fc9-5826-02909870e5fb/hadoop.tmp.dir so I do NOT create it in target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada, deleteOnExit=true 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/test.cache.data in system properties and HBase conf 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir in system properties and HBase conf 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 03:16:42,227 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 03:16:42,227 DEBUG [Listener at localhost.localdomain/36261] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:42,228 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-13 03:16:42,229 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/nfs.dump.dir in system properties and HBase conf 2023-07-13 03:16:42,229 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir in system properties and HBase conf 2023-07-13 03:16:42,229 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:42,229 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 03:16:42,229 INFO [Listener at localhost.localdomain/36261] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 03:16:42,231 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-13 03:16:42,232 WARN [Listener at localhost.localdomain/36261] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:42,233 WARN [Listener at localhost.localdomain/36261] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:42,289 DEBUG [Listener at localhost.localdomain/36261-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1008454350d000a, quorum=127.0.0.1:56998, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 03:16:42,289 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1008454350d000a, quorum=127.0.0.1:56998, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 03:16:42,337 WARN [Listener at localhost.localdomain/36261] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:42,340 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:42,351 INFO [Listener at 
localhost.localdomain/36261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/Jetty_localhost_localdomain_42155_hdfs____.2xv1lf/webapp 2023-07-13 03:16:42,469 INFO [Listener at localhost.localdomain/36261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:42155 2023-07-13 03:16:42,472 WARN [Listener at localhost.localdomain/36261] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:42,472 WARN [Listener at localhost.localdomain/36261] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:42,550 WARN [Listener at localhost.localdomain/40633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:42,587 WARN [Listener at localhost.localdomain/40633] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:42,591 WARN [Listener at localhost.localdomain/40633] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:42,593 INFO [Listener at localhost.localdomain/40633] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:42,599 INFO [Listener at localhost.localdomain/40633] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/Jetty_localhost_46471_datanode____.165o0d/webapp 2023-07-13 03:16:42,688 INFO [Listener at localhost.localdomain/40633] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46471 2023-07-13 03:16:42,697 WARN [Listener at localhost.localdomain/42993] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:42,717 WARN [Listener at localhost.localdomain/42993] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:42,719 WARN [Listener at localhost.localdomain/42993] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:42,720 INFO [Listener at localhost.localdomain/42993] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:42,724 INFO [Listener at localhost.localdomain/42993] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/Jetty_localhost_36043_datanode____y5qdzn/webapp 2023-07-13 03:16:42,793 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xabb7a4795aaf1210: Processing first storage report for DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995 from datanode b77c83ae-9641-4299-80d1-81ced1e96813 2023-07-13 03:16:42,794 INFO [Block report processor] 
blockmanagement.BlockManager(2228): BLOCK* processReport 0xabb7a4795aaf1210: from storage DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995 node DatanodeRegistration(127.0.0.1:38067, datanodeUuid=b77c83ae-9641-4299-80d1-81ced1e96813, infoPort=36235, infoSecurePort=0, ipcPort=42993, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 03:16:42,794 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xabb7a4795aaf1210: Processing first storage report for DS-371963d4-8537-406f-baea-4f5a6f7dd9fb from datanode b77c83ae-9641-4299-80d1-81ced1e96813 2023-07-13 03:16:42,794 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xabb7a4795aaf1210: from storage DS-371963d4-8537-406f-baea-4f5a6f7dd9fb node DatanodeRegistration(127.0.0.1:38067, datanodeUuid=b77c83ae-9641-4299-80d1-81ced1e96813, infoPort=36235, infoSecurePort=0, ipcPort=42993, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:42,826 INFO [Listener at localhost.localdomain/42993] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36043 2023-07-13 03:16:42,843 WARN [Listener at localhost.localdomain/40155] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:42,948 WARN [Listener at localhost.localdomain/40155] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:42,955 WARN [Listener at localhost.localdomain/40155] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:42,957 INFO [Listener at localhost.localdomain/40155] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:42,964 INFO [Listener at localhost.localdomain/40155] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/Jetty_localhost_39575_datanode____ro6710/webapp 2023-07-13 03:16:43,133 INFO [Listener at localhost.localdomain/40155] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39575 2023-07-13 03:16:43,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b36efd481d0def3: Processing first storage report for DS-e2a42619-dd6a-4639-b2b8-070717de9c2f from datanode ccdcbc3d-adc9-4167-9ab3-4e0d9618f958 2023-07-13 03:16:43,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b36efd481d0def3: from storage DS-e2a42619-dd6a-4639-b2b8-070717de9c2f node DatanodeRegistration(127.0.0.1:45819, datanodeUuid=ccdcbc3d-adc9-4167-9ab3-4e0d9618f958, infoPort=40195, infoSecurePort=0, ipcPort=40155, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:43,134 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9b36efd481d0def3: Processing first storage report for DS-6e0b1076-7562-41ec-b79a-fdda76132165 from datanode ccdcbc3d-adc9-4167-9ab3-4e0d9618f958 
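At this point the log shows the first mini-cluster reported down and a second one being started with the option set the utility prints (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1), followed by the new mini-DFS datanodes registering their storages with the namenode. Below is a hedged sketch of how a test expresses such a restart with the StartMiniClusterOption builder; only the option values are taken from the log, and the surrounding scaffolding is illustrative.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();

    // Values mirror the StartMiniClusterOption toString() printed above.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();

    util.startMiniCluster(option);  // starts mini-DFS, MiniZK, master and region servers
    // ... test body ...
    util.shutdownMiniCluster();
  }
}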
2023-07-13 03:16:43,134 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9b36efd481d0def3: from storage DS-6e0b1076-7562-41ec-b79a-fdda76132165 node DatanodeRegistration(127.0.0.1:45819, datanodeUuid=ccdcbc3d-adc9-4167-9ab3-4e0d9618f958, infoPort=40195, infoSecurePort=0, ipcPort=40155, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:43,161 WARN [Listener at localhost.localdomain/44085] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:43,301 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4ee1e14f9a3e7e9: Processing first storage report for DS-42f78698-4b70-4281-87db-568fcefb78de from datanode 5b073fb3-8b5d-4ba4-ac60-50ff0a5d0725 2023-07-13 03:16:43,301 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4ee1e14f9a3e7e9: from storage DS-42f78698-4b70-4281-87db-568fcefb78de node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=5b073fb3-8b5d-4ba4-ac60-50ff0a5d0725, infoPort=33297, infoSecurePort=0, ipcPort=44085, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:43,301 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf4ee1e14f9a3e7e9: Processing first storage report for DS-02058e94-31eb-41fc-9782-2572a211a1da from datanode 5b073fb3-8b5d-4ba4-ac60-50ff0a5d0725 2023-07-13 03:16:43,301 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf4ee1e14f9a3e7e9: from storage DS-02058e94-31eb-41fc-9782-2572a211a1da node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=5b073fb3-8b5d-4ba4-ac60-50ff0a5d0725, infoPort=33297, infoSecurePort=0, ipcPort=44085, storageInfo=lv=-57;cid=testClusterID;nsid=1053169063;c=1689218202298), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:43,321 DEBUG [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636 2023-07-13 03:16:43,325 INFO [Listener at localhost.localdomain/44085] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/zookeeper_0, clientPort=62986, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 03:16:43,327 INFO [Listener at localhost.localdomain/44085] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62986 2023-07-13 03:16:43,328 INFO [Listener at localhost.localdomain/44085] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,329 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,360 INFO [Listener at localhost.localdomain/44085] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5 with version=8 2023-07-13 03:16:43,361 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/hbase-staging 2023-07-13 03:16:43,362 DEBUG [Listener at localhost.localdomain/44085] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 03:16:43,362 DEBUG [Listener at localhost.localdomain/44085] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 03:16:43,362 DEBUG [Listener at localhost.localdomain/44085] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 03:16:43,362 DEBUG [Listener at localhost.localdomain/44085] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-13 03:16:43,363 INFO [Listener at localhost.localdomain/44085] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:43,364 INFO [Listener at localhost.localdomain/44085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:43,372 INFO [Listener at localhost.localdomain/44085] ipc.NettyRpcServer(120): Bind to /148.251.75.209:39355 2023-07-13 03:16:43,372 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,374 INFO [Listener at localhost.localdomain/44085] 
fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,375 INFO [Listener at localhost.localdomain/44085] zookeeper.RecoverableZooKeeper(93): Process identifier=master:39355 connecting to ZooKeeper ensemble=127.0.0.1:62986 2023-07-13 03:16:43,381 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:393550x0, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:43,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:39355-0x1008454bb3b0000 connected 2023-07-13 03:16:43,413 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:43,419 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:43,419 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:43,430 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39355 2023-07-13 03:16:43,431 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39355 2023-07-13 03:16:43,433 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39355 2023-07-13 03:16:43,434 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39355 2023-07-13 03:16:43,434 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39355 2023-07-13 03:16:43,437 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:43,437 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:43,438 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:43,438 INFO [Listener at localhost.localdomain/44085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 03:16:43,438 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:43,439 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:43,439 INFO [Listener at localhost.localdomain/44085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 03:16:43,439 INFO [Listener at localhost.localdomain/44085] http.HttpServer(1146): Jetty bound to port 42411 2023-07-13 03:16:43,440 INFO [Listener at localhost.localdomain/44085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:43,449 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,450 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3f9a90e1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:43,450 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,450 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@640210ce{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:43,556 INFO [Listener at localhost.localdomain/44085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:43,557 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:43,558 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:43,558 INFO [Listener at localhost.localdomain/44085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:43,560 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,561 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@40a68960{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/jetty-0_0_0_0-42411-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7211377433057406775/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:43,563 INFO [Listener at localhost.localdomain/44085] server.AbstractConnector(333): Started ServerConnector@3bfe2510{HTTP/1.1, (http/1.1)}{0.0.0.0:42411} 2023-07-13 03:16:43,563 INFO [Listener at localhost.localdomain/44085] server.Server(415): Started @39095ms 2023-07-13 03:16:43,563 INFO [Listener at localhost.localdomain/44085] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5, hbase.cluster.distributed=false 2023-07-13 03:16:43,581 INFO [Listener at localhost.localdomain/44085] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 
server-side Connection retries=45 2023-07-13 03:16:43,581 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,582 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,582 INFO [Listener at localhost.localdomain/44085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:43,582 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,582 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:43,582 INFO [Listener at localhost.localdomain/44085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:43,584 INFO [Listener at localhost.localdomain/44085] ipc.NettyRpcServer(120): Bind to /148.251.75.209:43619 2023-07-13 03:16:43,584 INFO [Listener at localhost.localdomain/44085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:43,598 DEBUG [Listener at localhost.localdomain/44085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:43,599 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,600 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,601 INFO [Listener at localhost.localdomain/44085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43619 connecting to ZooKeeper ensemble=127.0.0.1:62986 2023-07-13 03:16:43,609 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:436190x0, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:43,611 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:436190x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:43,612 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43619-0x1008454bb3b0001 connected 2023-07-13 03:16:43,612 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:43,613 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set 
watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:43,623 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43619 2023-07-13 03:16:43,623 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43619 2023-07-13 03:16:43,625 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43619 2023-07-13 03:16:43,631 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43619 2023-07-13 03:16:43,631 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43619 2023-07-13 03:16:43,636 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:43,636 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:43,636 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:43,637 INFO [Listener at localhost.localdomain/44085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:43,637 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:43,637 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:43,638 INFO [Listener at localhost.localdomain/44085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
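The restarted cluster's MiniZooKeeperCluster is listening on client port 62986, and the new master has bound its NettyRpcServer and set watchers under the /hbase base znode. A client outside the test utility could reach this cluster by pointing the stock client API at that ensemble, as in the illustrative fragment below; the quorum address and port come from the log, while the class name and printed message are placeholders, and inside the test itself the shared connection would normally come from HBaseTestingUtility#getConnection() instead.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    // Quorum and client port taken from the MiniZooKeeperCluster entry above
    // ("ran 'stat' on client port=62986"); the rest is illustrative.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 62986);

    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      // The client discovers the active master through the /hbase/master znode
      // that the master publishes when it registers with ZooKeeper.
      System.out.println("connected: " + !connection.isClosed());
    }
  }
}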
2023-07-13 03:16:43,641 INFO [Listener at localhost.localdomain/44085] http.HttpServer(1146): Jetty bound to port 36221 2023-07-13 03:16:43,641 INFO [Listener at localhost.localdomain/44085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:43,655 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,656 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@78931022{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:43,657 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,657 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@f9e9e62{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:43,801 INFO [Listener at localhost.localdomain/44085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:43,802 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:43,802 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:43,802 INFO [Listener at localhost.localdomain/44085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:43,828 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,829 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@368a5eaa{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/jetty-0_0_0_0-36221-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1385487933143759797/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:43,831 INFO [Listener at localhost.localdomain/44085] server.AbstractConnector(333): Started ServerConnector@7025d83c{HTTP/1.1, (http/1.1)}{0.0.0.0:36221} 2023-07-13 03:16:43,832 INFO [Listener at localhost.localdomain/44085] server.Server(415): Started @39364ms 2023-07-13 03:16:43,851 INFO [Listener at localhost.localdomain/44085] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:43,851 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,852 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:43,852 INFO [Listener at localhost.localdomain/44085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:43,852 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:43,852 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:43,852 INFO [Listener at localhost.localdomain/44085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:43,853 INFO [Listener at localhost.localdomain/44085] ipc.NettyRpcServer(120): Bind to /148.251.75.209:34063 2023-07-13 03:16:43,854 INFO [Listener at localhost.localdomain/44085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:43,862 DEBUG [Listener at localhost.localdomain/44085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:43,863 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,864 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:43,866 INFO [Listener at localhost.localdomain/44085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34063 connecting to ZooKeeper ensemble=127.0.0.1:62986 2023-07-13 03:16:43,875 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:340630x0, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:43,877 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:340630x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:43,878 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:340630x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:43,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34063-0x1008454bb3b0002 connected 2023-07-13 03:16:43,880 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:43,891 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34063 2023-07-13 03:16:43,891 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34063 2023-07-13 03:16:43,892 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34063 2023-07-13 03:16:43,892 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34063 2023-07-13 03:16:43,892 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34063 2023-07-13 03:16:43,895 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:43,895 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:43,896 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:43,896 INFO [Listener at localhost.localdomain/44085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:43,897 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:43,897 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:43,897 INFO [Listener at localhost.localdomain/44085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
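The "Set watcher on znode that does not yet exist, /hbase/master" lines above come from the region server registering ZooKeeper exists-watches that fire once the znode is later created. A minimal sketch of that pattern using the plain Apache ZooKeeper client rather than HBase's internal ZKUtil; the connect string 127.0.0.1:62986 and the /hbase/master path are taken from the log, the class name and latch wiring are illustrative only.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative only: waits for /hbase/master to appear, roughly the shape of
    // the "Set watcher on znode that does not yet exist" registrations above.
    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch created = new CountDownLatch(1);
        Watcher watcher = (WatchedEvent event) -> {
          if (event.getType() == Watcher.Event.EventType.NodeCreated
              && "/hbase/master".equals(event.getPath())) {
            created.countDown();
          }
        };
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62986", 90_000, watcher);
        // exists() registers a watch even when the znode is absent; the default
        // watcher above then receives a NodeCreated event once the active master
        // writes /hbase/master.
        if (zk.exists("/hbase/master", true) == null) {
          created.await();
        }
        zk.close();
      }
    }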
2023-07-13 03:16:43,898 INFO [Listener at localhost.localdomain/44085] http.HttpServer(1146): Jetty bound to port 42201 2023-07-13 03:16:43,898 INFO [Listener at localhost.localdomain/44085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:43,911 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,911 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@184ce85b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:43,912 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:43,912 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3da9a951{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:44,041 INFO [Listener at localhost.localdomain/44085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:44,043 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:44,043 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:44,043 INFO [Listener at localhost.localdomain/44085] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:44,044 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:44,045 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@325c7eeb{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/jetty-0_0_0_0-42201-hbase-server-2_4_18-SNAPSHOT_jar-_-any-55421767327488633/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:44,047 INFO [Listener at localhost.localdomain/44085] server.AbstractConnector(333): Started ServerConnector@731646f8{HTTP/1.1, (http/1.1)}{0.0.0.0:42201} 2023-07-13 03:16:44,047 INFO [Listener at localhost.localdomain/44085] server.Server(415): Started @39579ms 2023-07-13 03:16:44,060 INFO [Listener at localhost.localdomain/44085] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:44,060 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:44,060 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:44,061 INFO [Listener at localhost.localdomain/44085] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:44,061 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:44,061 INFO [Listener at localhost.localdomain/44085] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:44,061 INFO [Listener at localhost.localdomain/44085] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:44,062 INFO [Listener at localhost.localdomain/44085] ipc.NettyRpcServer(120): Bind to /148.251.75.209:38781 2023-07-13 03:16:44,062 INFO [Listener at localhost.localdomain/44085] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:44,070 DEBUG [Listener at localhost.localdomain/44085] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:44,071 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:44,073 INFO [Listener at localhost.localdomain/44085] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:44,074 INFO [Listener at localhost.localdomain/44085] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38781 connecting to ZooKeeper ensemble=127.0.0.1:62986 2023-07-13 03:16:44,098 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:387810x0, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:44,100 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:387810x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:44,101 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:387810x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:44,103 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ZKUtil(164): regionserver:387810x0, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:44,107 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38781-0x1008454bb3b0003 connected 2023-07-13 03:16:44,110 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38781 2023-07-13 03:16:44,114 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38781 2023-07-13 03:16:44,122 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38781 2023-07-13 03:16:44,127 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38781 2023-07-13 03:16:44,130 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38781 2023-07-13 03:16:44,133 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:44,133 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:44,133 INFO [Listener at localhost.localdomain/44085] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:44,133 INFO [Listener at localhost.localdomain/44085] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:44,133 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:44,134 INFO [Listener at localhost.localdomain/44085] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:44,134 INFO [Listener at localhost.localdomain/44085] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
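The RpcExecutor records above describe fixed handler pools draining bounded FIFO call queues (queueClass=LinkedBlockingQueue, maxQueueLength=30, handlerCount=3). A schematic of that producer/consumer shape in plain java.util.concurrent, not HBase's actual RpcExecutor; the queue length and handler count are copied from the log, and the Call type is a stand-in.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Schematic: a bounded FIFO call queue drained by a fixed pool of handler
    // threads, in the spirit of the maxQueueLength=30 / handlerCount=3 lines above.
    public class FifoCallQueueSketch {
      interface Call { void run(); } // stand-in for an RPC call

      public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Call> callQueue = new LinkedBlockingQueue<>(30); // maxQueueLength=30

        for (int i = 0; i < 3; i++) { // handlerCount=3
          Thread handler = new Thread(() -> {
            try {
              while (true) {
                callQueue.take().run(); // block until a call arrives, then execute it
              }
            } catch (InterruptedException e) {
              Thread.currentThread().interrupt();
            }
          }, "sketch.FPBQ.Fifo.handler-" + i);
          handler.setDaemon(true);
          handler.start();
        }

        // offer() returns false when the queue is full, which is where a bounded
        // call queue pushes back on clients instead of buffering work forever.
        boolean accepted = callQueue.offer(() -> System.out.println("call handled"));
        System.out.println("accepted=" + accepted);
        Thread.sleep(200); // give the daemon handlers a moment before the JVM exits
      }
    }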
2023-07-13 03:16:44,134 INFO [Listener at localhost.localdomain/44085] http.HttpServer(1146): Jetty bound to port 41937 2023-07-13 03:16:44,135 INFO [Listener at localhost.localdomain/44085] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:44,143 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:44,143 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6ffd4a0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:44,143 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:44,144 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79db073d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:44,246 INFO [Listener at localhost.localdomain/44085] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:44,247 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:44,248 INFO [Listener at localhost.localdomain/44085] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:44,248 INFO [Listener at localhost.localdomain/44085] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:44,254 INFO [Listener at localhost.localdomain/44085] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:44,254 INFO [Listener at localhost.localdomain/44085] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@398de0cc{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/java.io.tmpdir/jetty-0_0_0_0-41937-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5772091115751408738/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:44,256 INFO [Listener at localhost.localdomain/44085] server.AbstractConnector(333): Started ServerConnector@48a49787{HTTP/1.1, (http/1.1)}{0.0.0.0:41937} 2023-07-13 03:16:44,256 INFO [Listener at localhost.localdomain/44085] server.Server(415): Started @39788ms 2023-07-13 03:16:44,259 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:44,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@66207073{HTTP/1.1, (http/1.1)}{0.0.0.0:41237} 2023-07-13 03:16:44,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): Started @39798ms 2023-07-13 03:16:44,266 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode 
/hbase/backup-masters/jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,267 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:44,268 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,269 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:44,269 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:44,269 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:44,269 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:44,271 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,272 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:44,273 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:44,273 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,39355,1689218203363 from backup master directory 2023-07-13 03:16:44,278 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,279 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
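The backup-master registration above is an ephemeral znode, so ZooKeeper removes it automatically if the master process dies; on winning mastership the master deletes it explicitly and publishes /hbase/master instead. A minimal sketch of the ephemeral-node part with the plain ZooKeeper client, with the server name copied from the log and everything else illustrative (this is not the actual ActiveMasterManager code).

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    // Registers an ephemeral backup-master znode; ZooKeeper deletes it when the
    // session ends, which is what lets other masters notice a crashed peer.
    public class BackupMasterRegistrationSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62986", 90_000, event -> { });
        String serverName = "jenkins-hbase20.apache.org,39355,1689218203363"; // from the log
        zk.create("/hbase/backup-masters/" + serverName,
            serverName.getBytes(StandardCharsets.UTF_8),
            ZooDefs.Ids.OPEN_ACL_UNSAFE,
            CreateMode.EPHEMERAL);
        // On becoming active the real master deletes this znode and writes
        // /hbase/master; here we simply clean up explicitly.
        zk.delete("/hbase/backup-masters/" + serverName, -1);
        zk.close();
      }
    }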
2023-07-13 03:16:44,279 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,279 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:44,301 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/hbase.id with ID: 2833d819-d07e-4159-94d4-3a4d273e4c78 2023-07-13 03:16:44,313 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:44,315 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,336 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3aa549a3 to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:44,342 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47e1650b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:44,342 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:44,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 03:16:44,343 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:44,345 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store-tmp 2023-07-13 03:16:44,764 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:44,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:44,765 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:44,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:44,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:44,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:44,765 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:44,765 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:44,766 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/WALs/jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,769 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C39355%2C1689218203363, suffix=, logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/WALs/jenkins-hbase20.apache.org,39355,1689218203363, archiveDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/oldWALs, maxLogs=10 2023-07-13 03:16:44,789 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK] 2023-07-13 03:16:44,789 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK] 2023-07-13 03:16:44,789 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK] 2023-07-13 03:16:44,801 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/WALs/jenkins-hbase20.apache.org,39355,1689218203363/jenkins-hbase20.apache.org%2C39355%2C1689218203363.1689218204769 2023-07-13 03:16:44,808 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK], DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK]] 2023-07-13 03:16:44,809 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:44,809 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:44,809 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,809 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,813 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,815 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 03:16:44,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 03:16:44,816 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:44,817 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,818 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,821 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:44,822 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:44,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11494802400, jitterRate=0.07053689658641815}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:44,823 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:44,823 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 03:16:44,824 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 03:16:44,824 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 03:16:44,824 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 03:16:44,825 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 03:16:44,825 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 03:16:44,825 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 03:16:44,827 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 03:16:44,828 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 03:16:44,829 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 03:16:44,829 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 03:16:44,829 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 03:16:44,833 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,833 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 03:16:44,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 03:16:44,834 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 03:16:44,835 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:44,835 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:44,835 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,835 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:44,836 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,39355,1689218203363, sessionid=0x1008454bb3b0000, setting cluster-up flag (Was=false) 2023-07-13 03:16:44,840 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,843 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 03:16:44,843 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): 
regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:44,844 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,847 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:44,851 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 03:16:44,852 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:44,853 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.hbase-snapshot/.tmp 2023-07-13 03:16:44,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 03:16:44,866 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 03:16:44,875 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 03:16:44,875 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:44,876 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 03:16:44,876 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-13 03:16:44,878 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:44,903 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:44,903 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
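The StochasticLoadBalancer lines above list the configured cost functions and report the "sum of multiplier of cost functions"; at a high level, each candidate cluster layout is scored as a multiplier-weighted sum of per-function costs and the balancer searches for a layout that lowers that score. A toy illustration of that weighted-sum shape only, with invented multipliers and costs, not the balancer's real cost model.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy scoring in the spirit of the balancer log line above: each cost function
    // yields a normalized cost and carries a multiplier, and a layout is scored by
    // the multiplier-weighted sum. Names mirror the log, numbers are invented.
    public class WeightedCostSketch {
      public static void main(String[] args) {
        Map<String, double[]> costFunctions = new LinkedHashMap<>(); // name -> {multiplier, cost}
        costFunctions.put("RegionCountSkewCostFunction", new double[] {500.0, 0.12});
        costFunctions.put("MoveCostFunction", new double[] {7.0, 0.30});
        costFunctions.put("ServerLocalityCostFunction", new double[] {25.0, 0.08});

        double sumOfMultipliers = 0.0;
        double weightedCost = 0.0;
        for (double[] entry : costFunctions.values()) {
          sumOfMultipliers += entry[0];
          weightedCost += entry[0] * entry[1];
        }
        // The log line reports the plain sum of multipliers alongside the list of
        // cost functions; the weighted total is the quantity being minimized.
        System.out.println("sum of multipliers = " + sumOfMultipliers);
        System.out.println("weighted cost = " + weightedCost);
      }
    }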
2023-07-13 03:16:44,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:44,904 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 03:16:44,904 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:44,904 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:44,904 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:44,904 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:44,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-13 03:16:44,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:44,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:44,907 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:44,940 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:44,940 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 03:16:44,944 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 
'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:44,945 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689218234945 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 03:16:44,946 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 03:16:44,956 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:44,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 03:16:44,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 03:16:44,959 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 03:16:44,966 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 03:16:44,966 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 03:16:44,967 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218204966,5,FailOnTimeoutGroup] 2023-07-13 03:16:44,967 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(951): ClusterId : 2833d819-d07e-4159-94d4-3a4d273e4c78 2023-07-13 03:16:44,991 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218204971,5,FailOnTimeoutGroup] 2023-07-13 03:16:44,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
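The FSTableDescriptors record above prints the hbase:meta descriptor with its 'info', 'rep_barrier' and 'table' families and their attributes (BLOOMFILTER, IN_MEMORY, VERSIONS, BLOCKSIZE and so on). A sketch of how such a descriptor is assembled with the public HBase 2.x builder API, shown for a hypothetical user table rather than hbase:meta itself; the family attributes are copied from the 'info' family above.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Builds a descriptor with one column family whose attributes mirror the
    // 'info' family printed above (BLOOMFILTER => NONE, IN_MEMORY => true,
    // VERSIONS => 3, BLOCKSIZE => 8192). The table name is hypothetical.
    public class DescriptorSketch {
      public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_table"))
            .setColumnFamily(info)
            .build();
      }
    }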
2023-07-13 03:16:44,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-13 03:16:44,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:44,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:44,992 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(951): ClusterId : 2833d819-d07e-4159-94d4-3a4d273e4c78 2023-07-13 03:16:44,993 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:44,995 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:44,995 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(951): ClusterId : 2833d819-d07e-4159-94d4-3a4d273e4c78 2023-07-13 03:16:44,998 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:44,998 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:45,003 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:44,998 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:45,003 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:45,013 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:45,014 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:45,014 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:45,039 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ReadOnlyZKClient(139): Connect 0x7fc985a5 to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:45,041 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:45,043 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ReadOnlyZKClient(139): Connect 0x5b9b1de8 to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:45,043 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:45,055 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ReadOnlyZKClient(139): Connect 0x1a20b7da to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:45,061 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 
03:16:45,062 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:45,062 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5 2023-07-13 03:16:45,084 DEBUG [RS:1;jenkins-hbase20:34063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@710e642c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:45,084 DEBUG [RS:1;jenkins-hbase20:34063] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7531eed5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:45,089 DEBUG [RS:0;jenkins-hbase20:43619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10d8dcaa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:45,090 DEBUG [RS:0;jenkins-hbase20:43619] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@55da40d5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:45,091 DEBUG [RS:2;jenkins-hbase20:38781] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@555cf8eb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:45,091 DEBUG [RS:2;jenkins-hbase20:38781] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51637698, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, 
fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:45,094 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:34063 2023-07-13 03:16:45,095 INFO [RS:1;jenkins-hbase20:34063] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:45,095 INFO [RS:1;jenkins-hbase20:34063] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:45,095 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 03:16:45,095 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39355,1689218203363 with isa=jenkins-hbase20.apache.org/148.251.75.209:34063, startcode=1689218203851 2023-07-13 03:16:45,096 DEBUG [RS:1;jenkins-hbase20:34063] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:45,099 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54443, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:45,101 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39355] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,102 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:45,103 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 03:16:45,103 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:38781 2023-07-13 03:16:45,103 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:43619 2023-07-13 03:16:45,103 INFO [RS:2;jenkins-hbase20:38781] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:45,103 INFO [RS:2;jenkins-hbase20:38781] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:45,103 INFO [RS:0;jenkins-hbase20:43619] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:45,103 INFO [RS:0;jenkins-hbase20:43619] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:45,103 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 03:16:45,103 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 03:16:45,103 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5 2023-07-13 03:16:45,103 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40633 2023-07-13 03:16:45,103 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42411 2023-07-13 03:16:45,104 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:45,105 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39355,1689218203363 with isa=jenkins-hbase20.apache.org/148.251.75.209:43619, startcode=1689218203580 2023-07-13 03:16:45,105 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ZKUtil(162): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,105 DEBUG [RS:0;jenkins-hbase20:43619] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:45,105 WARN [RS:1;jenkins-hbase20:34063] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:45,105 INFO [RS:1;jenkins-hbase20:34063] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:45,105 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,107 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,39355,1689218203363 with isa=jenkins-hbase20.apache.org/148.251.75.209:38781, startcode=1689218204060 2023-07-13 03:16:45,107 DEBUG [RS:2;jenkins-hbase20:38781] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:45,111 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:54295, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:45,111 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:52157, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:45,112 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39355] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 03:16:45,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 03:16:45,112 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39355] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:45,112 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 03:16:45,112 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5 2023-07-13 03:16:45,112 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5 2023-07-13 03:16:45,112 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40633 2023-07-13 03:16:45,112 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:40633 2023-07-13 03:16:45,112 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42411 2023-07-13 03:16:45,113 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42411 2023-07-13 03:16:45,126 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,34063,1689218203851] 2023-07-13 03:16:45,126 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,38781,1689218204060] 2023-07-13 03:16:45,127 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,43619,1689218203580] 2023-07-13 03:16:45,127 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ZKUtil(162): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,128 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ZKUtil(162): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,128 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ZKUtil(162): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,129 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ZKUtil(162): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,130 WARN 
[RS:2;jenkins-hbase20:38781] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:45,130 INFO [RS:2;jenkins-hbase20:38781] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:45,130 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,134 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ZKUtil(162): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,134 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:45,134 WARN [RS:0;jenkins-hbase20:43619] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:45,134 INFO [RS:0;jenkins-hbase20:43619] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:45,134 INFO [RS:1;jenkins-hbase20:34063] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:45,135 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,139 INFO [RS:1;jenkins-hbase20:34063] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:45,140 INFO [RS:1;jenkins-hbase20:34063] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:45,140 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
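The compaction throughput figures logged above (higher bound 100 MB/s, lower bound 50 MB/s, tuning period 60000 ms) are the defaults reported by PressureAwareCompactionThroughputController. A hedged sketch of how a test configuration could override them; the property names are quoted from memory of HBase 2.x and are assumptions to verify, not taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionThroughputTuning {
      public static Configuration tunedConf() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed property names corresponding to the "higher bound" / "lower bound" /
        // "tuning period" values printed by PressureAwareCompactionThroughputController.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 200L * 1024 * 1024); // 200 MB/s
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 100L * 1024 * 1024);  // 100 MB/s
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60 * 1000);            // 60 s
        return conf;
      }
    }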
2023-07-13 03:16:45,141 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:45,142 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ZKUtil(162): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,142 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ZKUtil(162): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,143 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ZKUtil(162): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,144 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:45,145 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:45,145 INFO [RS:2;jenkins-hbase20:38781] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:45,146 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:45,148 INFO [RS:2;jenkins-hbase20:38781] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:45,151 INFO [RS:2;jenkins-hbase20:38781] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:45,151 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,151 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:45,152 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,152 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:45,153 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,153 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,153 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,153 DEBUG [RS:1;jenkins-hbase20:34063] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,166 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ZKUtil(162): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,166 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,166 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,168 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ZKUtil(162): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,168 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,168 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,168 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:45,168 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,168 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,168 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,168 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:2;jenkins-hbase20:38781] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,169 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ZKUtil(162): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,172 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/info 2023-07-13 03:16:45,172 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:45,172 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:45,173 INFO [RS:0;jenkins-hbase20:43619] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 
2023-07-13 03:16:45,173 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,173 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:45,178 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:45,178 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,179 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,179 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,179 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,179 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:45,182 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,182 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:45,185 INFO [RS:1;jenkins-hbase20:34063] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:45,185 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34063,1689218203851-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:45,185 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/table 2023-07-13 03:16:45,186 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:45,187 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,194 INFO [RS:0;jenkins-hbase20:43619] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:45,195 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740 2023-07-13 03:16:45,196 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740 2023-07-13 03:16:45,200 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-13 03:16:45,201 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:45,202 INFO [RS:2;jenkins-hbase20:38781] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:45,203 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,38781,1689218204060-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,215 INFO [RS:0;jenkins-hbase20:43619] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:45,215 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
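The memstore numbers above follow from the default heap fractions: the low-water mark is 95% of the global limit (782.4 MB x 0.95 is roughly 743.3 MB), and the 42.7 MB per-family flush bound reported by FlushLargeStoresPolicy is the 128 MB region flush size divided across the three column families of hbase:meta (info, rep_barrier, table). A small sketch of that arithmetic; the property names in the comments are recalled HBase 2.x defaults, not values read from this log:

    public class MemstoreLimitMath {
      public static void main(String[] args) {
        // Values observed in the log above.
        double globalLimitMb = 782.4;      // heap * hbase.regionserver.global.memstore.size (assumed default 0.4)
        double lowerLimitFraction = 0.95;  // hbase.regionserver.global.memstore.size.lower.limit (assumed default)
        long regionFlushSizeMb = 128;      // hbase.hregion.memstore.flush.size (assumed default 128 MB)
        int metaColumnFamilies = 3;        // info, rep_barrier, table

        System.out.printf("low-water mark   ~ %.1f MB%n", globalLimitMb * lowerLimitFraction);            // ~743.3 MB
        System.out.printf("per-family flush ~ %.1f MB%n", (double) regionFlushSizeMb / metaColumnFamilies); // ~42.7 MB
      }
    }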
2023-07-13 03:16:45,215 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:45,216 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:45,217 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9494115680, jitterRate=-0.11579157412052155}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:45,217 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,217 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,217 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,217 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,217 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,217 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,217 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:45,217 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,218 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,218 DEBUG [RS:0;jenkins-hbase20:43619] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,218 DEBUG [RS:0;jenkins-hbase20:43619] 
executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:45,223 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:45,223 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:45,224 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,225 INFO [RS:1;jenkins-hbase20:34063] regionserver.Replication(203): jenkins-hbase20.apache.org,34063,1689218203851 started 2023-07-13 03:16:45,225 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,225 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,34063,1689218203851, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:34063, sessionid=0x1008454bb3b0002 2023-07-13 03:16:45,225 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,225 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:45,225 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,225 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:45,225 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 03:16:45,225 DEBUG [RS:1;jenkins-hbase20:34063] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,225 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34063,1689218203851' 2023-07-13 03:16:45,225 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:45,225 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34063,1689218203851' 2023-07-13 03:16:45,226 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(134): 
Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:45,227 DEBUG [RS:1;jenkins-hbase20:34063] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:45,227 DEBUG [RS:1;jenkins-hbase20:34063] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:45,227 INFO [RS:1;jenkins-hbase20:34063] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 03:16:45,230 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,230 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 03:16:45,237 INFO [RS:2;jenkins-hbase20:38781] regionserver.Replication(203): jenkins-hbase20.apache.org,38781,1689218204060 started 2023-07-13 03:16:45,237 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,38781,1689218204060, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:38781, sessionid=0x1008454bb3b0003 2023-07-13 03:16:45,239 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:45,239 DEBUG [RS:2;jenkins-hbase20:38781] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,239 DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38781,1689218204060' 2023-07-13 03:16:45,239 DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:45,239 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 03:16:45,239 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ZKUtil(398): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 03:16:45,240 DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:45,240 INFO [RS:1;jenkins-hbase20:34063] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 03:16:45,240 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:45,240 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:45,240 DEBUG [RS:2;jenkins-hbase20:38781] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,240 DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,38781,1689218204060' 2023-07-13 03:16:45,240 
DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:45,240 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,241 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,241 DEBUG [RS:2;jenkins-hbase20:38781] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:45,241 DEBUG [RS:2;jenkins-hbase20:38781] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:45,241 INFO [RS:2;jenkins-hbase20:38781] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 03:16:45,241 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,241 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ZKUtil(398): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 03:16:45,241 INFO [RS:2;jenkins-hbase20:38781] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 03:16:45,241 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,241 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,247 INFO [RS:0;jenkins-hbase20:43619] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:45,247 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,43619,1689218203580-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:45,262 INFO [RS:0;jenkins-hbase20:43619] regionserver.Replication(203): jenkins-hbase20.apache.org,43619,1689218203580 started 2023-07-13 03:16:45,262 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,43619,1689218203580, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:43619, sessionid=0x1008454bb3b0001 2023-07-13 03:16:45,262 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:45,262 DEBUG [RS:0;jenkins-hbase20:43619] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,262 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43619,1689218203580' 2023-07-13 03:16:45,262 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:45,262 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,43619,1689218203580' 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:45,263 DEBUG [RS:0;jenkins-hbase20:43619] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:45,264 INFO [RS:0;jenkins-hbase20:43619] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-13 03:16:45,264 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,264 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ZKUtil(398): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-13 03:16:45,264 INFO [RS:0;jenkins-hbase20:43619] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-13 03:16:45,264 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,264 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:45,344 INFO [RS:2;jenkins-hbase20:38781] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C38781%2C1689218204060, suffix=, logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,38781,1689218204060, archiveDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs, maxLogs=32 2023-07-13 03:16:45,344 INFO [RS:1;jenkins-hbase20:34063] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34063%2C1689218203851, suffix=, logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,34063,1689218203851, archiveDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs, maxLogs=32 2023-07-13 03:16:45,367 INFO [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43619%2C1689218203580, suffix=, logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580, archiveDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs, maxLogs=32 2023-07-13 03:16:45,379 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK] 2023-07-13 03:16:45,379 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK] 2023-07-13 03:16:45,383 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK] 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:45,390 DEBUG [jenkins-hbase20:39355] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:45,397 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43619,1689218203580, state=OPENING 2023-07-13 03:16:45,398 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK] 2023-07-13 03:16:45,398 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK] 2023-07-13 03:16:45,398 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK] 2023-07-13 03:16:45,399 INFO [RS:1;jenkins-hbase20:34063] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,34063,1689218203851/jenkins-hbase20.apache.org%2C34063%2C1689218203851.1689218205345 2023-07-13 03:16:45,399 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 03:16:45,400 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK] 2023-07-13 03:16:45,400 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:45,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43619,1689218203580}] 2023-07-13 03:16:45,401 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:45,401 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK] 2023-07-13 03:16:45,402 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK] 2023-07-13 03:16:45,402 DEBUG [RS:1;jenkins-hbase20:34063] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK], DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK]] 2023-07-13 03:16:45,408 INFO [RS:2;jenkins-hbase20:38781] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,38781,1689218204060/jenkins-hbase20.apache.org%2C38781%2C1689218204060.1689218205348 2023-07-13 03:16:45,409 INFO [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580/jenkins-hbase20.apache.org%2C43619%2C1689218203580.1689218205367 2023-07-13 03:16:45,410 DEBUG [RS:2;jenkins-hbase20:38781] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK], DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK]] 2023-07-13 03:16:45,414 DEBUG [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK]] 2023-07-13 03:16:45,536 WARN [ReadOnlyZKClient-127.0.0.1:62986@0x3aa549a3] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 03:16:45,536 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:45,543 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59042, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:45,544 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43619] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:59042 deadline: 1689218265544, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,567 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:45,570 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:45,575 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59044, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:45,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 03:16:45,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:45,590 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C43619%2C1689218203580.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580, archiveDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs, maxLogs=32 2023-07-13 03:16:45,609 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK] 2023-07-13 03:16:45,610 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK] 2023-07-13 03:16:45,641 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK] 2023-07-13 03:16:45,651 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580/jenkins-hbase20.apache.org%2C43619%2C1689218203580.meta.1689218205591.meta 2023-07-13 03:16:45,654 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38067,DS-07c3d157-1b5a-4d9d-b63e-7a40ce4cc995,DISK], DatanodeInfoWithStorage[127.0.0.1:45819,DS-e2a42619-dd6a-4639-b2b8-070717de9c2f,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-42f78698-4b70-4281-87db-568fcefb78de,DISK]] 2023-07-13 03:16:45,655 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:45,655 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:45,655 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 03:16:45,655 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-13 03:16:45,656 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 03:16:45,656 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:45,656 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 03:16:45,656 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 03:16:45,663 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:45,665 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/info 2023-07-13 03:16:45,665 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/info 2023-07-13 03:16:45,665 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:45,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,667 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:45,668 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:45,668 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:45,669 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:45,670 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,670 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:45,672 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/table 2023-07-13 03:16:45,672 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/table 2023-07-13 03:16:45,673 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:45,673 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:45,675 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740 2023-07-13 03:16:45,676 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740 2023-07-13 03:16:45,681 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 03:16:45,691 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:45,696 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11578189920, jitterRate=0.07830296456813812}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:45,696 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:45,698 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689218205566 2023-07-13 03:16:45,707 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 03:16:45,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 03:16:45,710 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,43619,1689218203580, state=OPEN 2023-07-13 03:16:45,711 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:45,711 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:45,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 03:16:45,717 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,43619,1689218203580 in 310 msec 2023-07-13 03:16:45,719 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 03:16:45,719 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 492 msec 2023-07-13 03:16:45,722 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 843 msec 2023-07-13 03:16:45,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689218205722, completionTime=-1 2023-07-13 03:16:45,722 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 03:16:45,722 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 03:16:45,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 03:16:45,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689218265729 2023-07-13 03:16:45,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689218325729 2023-07-13 03:16:45,729 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39355,1689218203363-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39355,1689218203363-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39355,1689218203363-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:39355, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:45,736 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
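The ChoreService entries above show the active master scheduling its periodic background chores (ClusterStatusChore, BalancerChore, RegionNormalizerChore, CatalogJanitor, HbckChore). For orientation only, a minimal sketch of the public ScheduledChore/ChoreService API those entries correspond to; the chore name and period here are made up for illustration and are not what the master uses.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
        public static void main(String[] args) throws InterruptedException {
            Stoppable stopper = new Stoppable() {
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            // Hypothetical chore; the real master schedules BalancerChore, CatalogJanitor, etc.
            ScheduledChore chore = new ScheduledChore("DemoChore", stopper, 60000) {
                @Override protected void chore() {
                    System.out.println("periodic work runs here");
                }
            };
            ChoreService service = new ChoreService("demo");
            service.scheduleChore(chore);
            Thread.sleep(1000);
            service.shutdown();
        }
    }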
2023-07-13 03:16:45,737 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:45,739 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 03:16:45,742 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 03:16:45,746 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:45,747 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:45,749 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:45,749 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c empty. 2023-07-13 03:16:45,750 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:45,750 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 03:16:45,777 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:45,779 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e582e8d4769bf8c2dea6f99da2e9924c, NAME => 'hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e582e8d4769bf8c2dea6f99da2e9924c, disabling compactions & flushes 2023-07-13 03:16:45,814 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. after waiting 0 ms 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:45,814 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:45,814 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e582e8d4769bf8c2dea6f99da2e9924c: 2023-07-13 03:16:45,817 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:45,820 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218205820"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218205820"}]},"ts":"1689218205820"} 2023-07-13 03:16:45,824 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:45,825 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:45,826 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218205826"}]},"ts":"1689218205826"} 2023-07-13 03:16:45,827 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 03:16:45,829 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:45,830 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:45,830 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:45,830 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:45,830 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:45,830 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e582e8d4769bf8c2dea6f99da2e9924c, ASSIGN}] 2023-07-13 03:16:45,836 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e582e8d4769bf8c2dea6f99da2e9924c, ASSIGN 2023-07-13 03:16:45,837 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e582e8d4769bf8c2dea6f99da2e9924c, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38781,1689218204060; forceNewPlan=false, retain=false 2023-07-13 03:16:45,987 INFO [jenkins-hbase20:39355] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:45,989 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e582e8d4769bf8c2dea6f99da2e9924c, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:45,989 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218205989"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218205989"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218205989"}]},"ts":"1689218205989"} 2023-07-13 03:16:45,991 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure e582e8d4769bf8c2dea6f99da2e9924c, server=jenkins-hbase20.apache.org,38781,1689218204060}] 2023-07-13 03:16:46,051 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:46,054 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 03:16:46,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:46,057 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:46,059 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,061 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc empty. 
2023-07-13 03:16:46,062 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,062 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 03:16:46,128 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:46,129 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => bcaf4edb61699cc22e3e13e2d72deddc, NAME => 'hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp 2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing bcaf4edb61699cc22e3e13e2d72deddc, disabling compactions & flushes 2023-07-13 03:16:46,142 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. after waiting 0 ms 2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,142 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 
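The create statement for 'hbase:rsgroup' above shows a descriptor carrying a MultiRowMutationEndpoint coprocessor, a DisabledRegionSplitPolicy attribute and a single 'm' family. A hedged sketch of how an equivalent descriptor can be assembled with the 2.x client API; this is illustrative and not the code path the master actually runs.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RsGroupDescriptorSketch {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
                // coprocessor$1 attribute from the create log line above
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                // SPLIT_POLICY metadata from the create log line above
                .setRegionSplitPolicyClassName(
                    "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                    .setBloomFilterType(BloomType.ROW)
                    .setMaxVersions(1)
                    .build())
                .build();
            System.out.println(td);
        }
    }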
2023-07-13 03:16:46,142 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for bcaf4edb61699cc22e3e13e2d72deddc: 2023-07-13 03:16:46,144 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:46,144 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:46,146 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39834, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:46,147 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:46,148 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218206148"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218206148"}]},"ts":"1689218206148"} 2023-07-13 03:16:46,151 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:46,152 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:46,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e582e8d4769bf8c2dea6f99da2e9924c, NAME => 'hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:46,152 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:46,152 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218206152"}]},"ts":"1689218206152"} 2023-07-13 03:16:46,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:46,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,152 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,153 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 03:16:46,154 INFO [StoreOpener-e582e8d4769bf8c2dea6f99da2e9924c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,156 DEBUG [StoreOpener-e582e8d4769bf8c2dea6f99da2e9924c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/info 2023-07-13 03:16:46,156 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:46,156 DEBUG [StoreOpener-e582e8d4769bf8c2dea6f99da2e9924c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/info 2023-07-13 03:16:46,156 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:46,156 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:46,156 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:46,156 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:46,156 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bcaf4edb61699cc22e3e13e2d72deddc, ASSIGN}] 2023-07-13 03:16:46,156 INFO [StoreOpener-e582e8d4769bf8c2dea6f99da2e9924c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e582e8d4769bf8c2dea6f99da2e9924c columnFamilyName info 2023-07-13 03:16:46,157 INFO [StoreOpener-e582e8d4769bf8c2dea6f99da2e9924c-1] regionserver.HStore(310): Store=e582e8d4769bf8c2dea6f99da2e9924c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:46,157 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bcaf4edb61699cc22e3e13e2d72deddc, ASSIGN 2023-07-13 03:16:46,158 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=7, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=bcaf4edb61699cc22e3e13e2d72deddc, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,38781,1689218204060; forceNewPlan=false, retain=false 2023-07-13 03:16:46,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,158 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,161 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:46,165 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:46,166 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened e582e8d4769bf8c2dea6f99da2e9924c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10926131840, jitterRate=0.017575323581695557}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:46,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for e582e8d4769bf8c2dea6f99da2e9924c: 2023-07-13 03:16:46,166 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c., pid=6, masterSystemTime=1689218206144 2023-07-13 03:16:46,172 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:46,173 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 
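Once "Opened hbase:namespace,,..." is logged and the post-open deploy task finishes, the region's location is published in hbase:meta. A small client-side sketch (assuming a running cluster and its Configuration on the classpath) of looking that location up through the public RegionLocator API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
                for (HRegionLocation loc : locator.getAllRegionLocations()) {
                    // Prints the encoded region name and the hosting region server,
                    // e.g. e582e8d4769bf8c2dea6f99da2e9924c -> jenkins-hbase20.apache.org,38781,...
                    System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
                }
            }
        }
    }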
2023-07-13 03:16:46,173 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e582e8d4769bf8c2dea6f99da2e9924c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:46,174 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218206173"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218206173"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218206173"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218206173"}]},"ts":"1689218206173"} 2023-07-13 03:16:46,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-13 03:16:46,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure e582e8d4769bf8c2dea6f99da2e9924c, server=jenkins-hbase20.apache.org,38781,1689218204060 in 184 msec 2023-07-13 03:16:46,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 03:16:46,178 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e582e8d4769bf8c2dea6f99da2e9924c, ASSIGN in 346 msec 2023-07-13 03:16:46,178 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:46,178 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218206178"}]},"ts":"1689218206178"} 2023-07-13 03:16:46,179 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 03:16:46,181 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:46,189 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 444 msec 2023-07-13 03:16:46,245 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 03:16:46,246 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:46,246 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:46,249 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:46,250 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:39848, 
version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:46,254 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 03:16:46,261 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:46,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-13 03:16:46,277 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 03:16:46,280 DEBUG [PEWorker-3] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-13 03:16:46,281 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 03:16:46,308 INFO [jenkins-hbase20:39355] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:46,309 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=bcaf4edb61699cc22e3e13e2d72deddc, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:46,309 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218206309"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218206309"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218206309"}]},"ts":"1689218206309"} 2023-07-13 03:16:46,311 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=8, state=RUNNABLE; OpenRegionProcedure bcaf4edb61699cc22e3e13e2d72deddc, server=jenkins-hbase20.apache.org,38781,1689218204060}] 2023-07-13 03:16:46,469 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,469 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcaf4edb61699cc22e3e13e2d72deddc, NAME => 'hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:46,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:46,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. service=MultiRowMutationService 2023-07-13 03:16:46,470 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 03:16:46,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:46,470 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,471 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,474 INFO [StoreOpener-bcaf4edb61699cc22e3e13e2d72deddc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,477 DEBUG [StoreOpener-bcaf4edb61699cc22e3e13e2d72deddc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/m 2023-07-13 03:16:46,477 DEBUG [StoreOpener-bcaf4edb61699cc22e3e13e2d72deddc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/m 2023-07-13 03:16:46,478 INFO [StoreOpener-bcaf4edb61699cc22e3e13e2d72deddc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcaf4edb61699cc22e3e13e2d72deddc columnFamilyName m 2023-07-13 03:16:46,478 INFO [StoreOpener-bcaf4edb61699cc22e3e13e2d72deddc-1] regionserver.HStore(310): Store=bcaf4edb61699cc22e3e13e2d72deddc/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:46,479 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,483 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:46,485 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:46,486 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened bcaf4edb61699cc22e3e13e2d72deddc; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2d74743a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:46,486 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for bcaf4edb61699cc22e3e13e2d72deddc: 2023-07-13 03:16:46,486 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc., pid=11, masterSystemTime=1689218206463 2023-07-13 03:16:46,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,488 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:46,488 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=bcaf4edb61699cc22e3e13e2d72deddc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:46,488 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218206488"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218206488"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218206488"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218206488"}]},"ts":"1689218206488"} 2023-07-13 03:16:46,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=8 2023-07-13 03:16:46,492 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=8, state=SUCCESS; OpenRegionProcedure bcaf4edb61699cc22e3e13e2d72deddc, server=jenkins-hbase20.apache.org,38781,1689218204060 in 179 msec 2023-07-13 03:16:46,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-13 03:16:46,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=bcaf4edb61699cc22e3e13e2d72deddc, ASSIGN in 336 msec 2023-07-13 03:16:46,659 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:46,780 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 503 msec 2023-07-13 03:16:46,781 INFO 
[PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:46,781 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218206781"}]},"ts":"1689218206781"} 2023-07-13 03:16:46,782 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 03:16:46,976 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 03:16:46,977 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=7, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:46,978 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 03:16:46,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.699sec 2023-07-13 03:16:46,978 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 926 msec 2023-07-13 03:16:46,978 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-13 03:16:46,979 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:46,981 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-13 03:16:46,982 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-13 03:16:46,983 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:46,984 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:46,985 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 
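With MasterQuotaManager initializing quota support and the 'hbase:quota' table being created, quota settings can later be applied through the Admin API and are persisted in that table. A hedged sketch, assuming quotas are enabled on the cluster; the table name and limit below are made up for illustration.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class QuotaSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Throttle a hypothetical table to 100 requests per second; the
                // setting is stored in the hbase:quota table created above.
                admin.setQuota(QuotaSettingsFactory.throttleTable(
                    TableName.valueOf("default:t1"), ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
            }
        }
    }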
2023-07-13 03:16:46,986 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/quota/9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:46,986 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/quota/9a36231cb34559f193a2957e83cea336 empty. 2023-07-13 03:16:46,987 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/quota/9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:46,987 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-13 03:16:46,989 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-13 03:16:46,989 INFO [master/jenkins-hbase20:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-13 03:16:46,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:46,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:46,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 03:16:46,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 03:16:46,992 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39355,1689218203363-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 03:16:46,993 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,39355,1689218203363-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-13 03:16:46,993 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 03:16:47,002 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:47,003 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9a36231cb34559f193a2957e83cea336, NAME => 'hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp 2023-07-13 03:16:47,010 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:47,011 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 9a36231cb34559f193a2957e83cea336, disabling compactions & flushes 2023-07-13 03:16:47,011 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,011 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,011 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. after waiting 0 ms 2023-07-13 03:16:47,011 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,011 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,011 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 9a36231cb34559f193a2957e83cea336: 2023-07-13 03:16:47,013 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:47,013 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689218207013"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218207013"}]},"ts":"1689218207013"} 2023-07-13 03:16:47,015 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-13 03:16:47,015 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:47,015 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218207015"}]},"ts":"1689218207015"} 2023-07-13 03:16:47,016 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-13 03:16:47,019 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:47,019 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:47,019 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:47,019 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:47,019 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:47,019 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=9a36231cb34559f193a2957e83cea336, ASSIGN}] 2023-07-13 03:16:47,020 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=9a36231cb34559f193a2957e83cea336, ASSIGN 2023-07-13 03:16:47,020 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=9a36231cb34559f193a2957e83cea336, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,34063,1689218203851; forceNewPlan=false, retain=false 2023-07-13 03:16:47,060 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 03:16:47,060 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
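At this point the RSGroupStartupWorker has the group metadata table online and is refreshing its cache. In branch-2.4 the group information can be queried through the RSGroupAdminClient shipped in the hbase-rsgroup module; the following is a rough sketch assuming that client API (the group name "default" always exists), not a verified excerpt from the test.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RsGroupQuerySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                // "default" is the built-in group every region server starts in.
                RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("default");
                System.out.println("servers in default group: " + info.getServers());
            }
        }
    }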
2023-07-13 03:16:47,068 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ReadOnlyZKClient(139): Connect 0x3b5e8ff2 to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:47,071 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:47,071 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:47,074 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:47,075 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,39355,1689218203363] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 03:16:47,075 DEBUG [Listener at localhost.localdomain/44085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3340bcc3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:47,077 DEBUG [hconnection-0x58a1c1af-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:47,080 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:59050, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:47,082 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:47,082 INFO [Listener at localhost.localdomain/44085] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:47,086 DEBUG [Listener at localhost.localdomain/44085] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 03:16:47,088 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:36240, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 03:16:47,090 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-13 03:16:47,090 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:47,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-13 03:16:47,092 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ReadOnlyZKClient(139): Connect 0x7782f490 to 127.0.0.1:62986 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 
03:16:47,105 DEBUG [Listener at localhost.localdomain/44085] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2178ef76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:47,105 INFO [Listener at localhost.localdomain/44085] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62986 2023-07-13 03:16:47,109 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:47,111 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008454bb3b000a connected 2023-07-13 03:16:47,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-13 03:16:47,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-13 03:16:47,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 03:16:47,127 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:47,130 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-13 03:16:47,171 INFO [jenkins-hbase20:39355] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
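At 03:16:47,114 the master handles the client request that creates namespace np1 with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2 (CreateNamespaceProcedure pid=14, finished in 14 msec). A minimal sketch of the equivalent client call, assuming an already-open Admin handle named admin (the wrapper class is hypothetical):

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public final class Np1NamespaceSketch {
  // Sketch only: 'admin' is assumed to be an already-open Admin handle for this cluster.
  public static void createQuotaLimitedNamespace(Admin admin) throws IOException {
    NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
        // Same properties the master logs for CreateNamespaceProcedure pid=14:
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .addConfiguration("hbase.namespace.quota.maxtables", "2")
        .build();
    admin.createNamespace(np1);
  }
}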
2023-07-13 03:16:47,172 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9a36231cb34559f193a2957e83cea336, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,172 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689218207172"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218207172"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218207172"}]},"ts":"1689218207172"} 2023-07-13 03:16:47,173 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure 9a36231cb34559f193a2957e83cea336, server=jenkins-hbase20.apache.org,34063,1689218203851}] 2023-07-13 03:16:47,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-13 03:16:47,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:47,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-13 03:16:47,238 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:47,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-13 03:16:47,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:47,241 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:47,242 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:47,244 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:47,245 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,246 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f empty. 
2023-07-13 03:16:47,247 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,247 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 03:16:47,259 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:47,260 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0cdf20d390633a07b568f3cdc141614f, NAME => 'np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp 2023-07-13 03:16:47,269 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:47,270 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 0cdf20d390633a07b568f3cdc141614f, disabling compactions & flushes 2023-07-13 03:16:47,270 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:47,270 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:47,270 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. after waiting 0 ms 2023-07-13 03:16:47,270 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:47,270 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:47,270 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 0cdf20d390633a07b568f3cdc141614f: 2023-07-13 03:16:47,272 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:47,273 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218207272"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218207272"}]},"ts":"1689218207272"} 2023-07-13 03:16:47,274 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
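Entries from 03:16:47,234 onward cover CreateTableProcedure pid=16 for np1:table1: the pre-operation hook, writing the FS layout under .tmp, initializing region 0cdf20d390633a07b568f3cdc141614f, and adding it to hbase:meta. The table has a single column family fam1 with default attributes. A minimal sketch of the same create from the client side, again assuming an open Admin handle (wrapper class and method name are illustrative):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public final class Np1Table1Sketch {
  // Sketch only: 'admin' is assumed to be an already-open Admin handle.
  public static void createTable1(Admin admin) throws IOException {
    TableDescriptor table1 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table1"))
        // One family named 'fam1'; the remaining attributes in the log (BLOOMFILTER,
        // VERSIONS, BLOCKSIZE, ...) are the defaults the master prints back.
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .build();
    admin.createTable(table1); // single region, as in pid=16 above
  }
}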
2023-07-13 03:16:47,274 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:47,274 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218207274"}]},"ts":"1689218207274"} 2023-07-13 03:16:47,276 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-13 03:16:47,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:47,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:47,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:47,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:47,278 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:47,278 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, ASSIGN}] 2023-07-13 03:16:47,279 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, ASSIGN 2023-07-13 03:16:47,279 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,34063,1689218203851; forceNewPlan=false, retain=false 2023-07-13 03:16:47,326 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,326 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:47,328 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48034, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:47,334 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 
2023-07-13 03:16:47,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9a36231cb34559f193a2957e83cea336, NAME => 'hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:47,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:47,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,337 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,339 DEBUG [StoreOpener-9a36231cb34559f193a2957e83cea336-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/q 2023-07-13 03:16:47,339 DEBUG [StoreOpener-9a36231cb34559f193a2957e83cea336-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/q 2023-07-13 03:16:47,339 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9a36231cb34559f193a2957e83cea336 columnFamilyName q 2023-07-13 03:16:47,340 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] regionserver.HStore(310): Store=9a36231cb34559f193a2957e83cea336/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:47,340 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,341 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:47,341 DEBUG [StoreOpener-9a36231cb34559f193a2957e83cea336-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/u 2023-07-13 03:16:47,341 DEBUG [StoreOpener-9a36231cb34559f193a2957e83cea336-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/u 2023-07-13 03:16:47,341 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9a36231cb34559f193a2957e83cea336 columnFamilyName u 2023-07-13 03:16:47,342 INFO [StoreOpener-9a36231cb34559f193a2957e83cea336-1] regionserver.HStore(310): Store=9a36231cb34559f193a2957e83cea336/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:47,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,343 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,345 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
2023-07-13 03:16:47,346 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:47,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:47,349 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 9a36231cb34559f193a2957e83cea336; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10564908480, jitterRate=-0.01606622338294983}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-13 03:16:47,350 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 9a36231cb34559f193a2957e83cea336: 2023-07-13 03:16:47,351 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336., pid=15, masterSystemTime=1689218207326 2023-07-13 03:16:47,355 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,356 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:47,358 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=9a36231cb34559f193a2957e83cea336, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,358 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689218207358"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218207358"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218207358"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218207358"}]},"ts":"1689218207358"} 2023-07-13 03:16:47,368 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-13 03:16:47,368 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure 9a36231cb34559f193a2957e83cea336, server=jenkins-hbase20.apache.org,34063,1689218203851 in 187 msec 2023-07-13 03:16:47,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 03:16:47,375 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=9a36231cb34559f193a2957e83cea336, ASSIGN in 349 msec 2023-07-13 03:16:47,376 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:47,376 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218207376"}]},"ts":"1689218207376"} 2023-07-13 03:16:47,377 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-13 03:16:47,380 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:47,381 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 401 msec 2023-07-13 03:16:47,429 INFO [jenkins-hbase20:39355] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-13 03:16:47,431 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0cdf20d390633a07b568f3cdc141614f, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,431 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218207431"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218207431"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218207431"}]},"ts":"1689218207431"} 2023-07-13 03:16:47,432 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 0cdf20d390633a07b568f3cdc141614f, server=jenkins-hbase20.apache.org,34063,1689218203851}] 2023-07-13 03:16:47,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:47,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 
2023-07-13 03:16:47,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0cdf20d390633a07b568f3cdc141614f, NAME => 'np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:47,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:47,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,591 INFO [StoreOpener-0cdf20d390633a07b568f3cdc141614f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,592 DEBUG [StoreOpener-0cdf20d390633a07b568f3cdc141614f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/fam1 2023-07-13 03:16:47,592 DEBUG [StoreOpener-0cdf20d390633a07b568f3cdc141614f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/fam1 2023-07-13 03:16:47,593 INFO [StoreOpener-0cdf20d390633a07b568f3cdc141614f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0cdf20d390633a07b568f3cdc141614f columnFamilyName fam1 2023-07-13 03:16:47,593 INFO [StoreOpener-0cdf20d390633a07b568f3cdc141614f-1] regionserver.HStore(310): Store=0cdf20d390633a07b568f3cdc141614f/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:47,594 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits 
file(s) under hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:47,609 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:47,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 0cdf20d390633a07b568f3cdc141614f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11048918240, jitterRate=0.029010698199272156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:47,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 0cdf20d390633a07b568f3cdc141614f: 2023-07-13 03:16:47,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f., pid=18, masterSystemTime=1689218207584 2023-07-13 03:16:47,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:47,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 
2023-07-13 03:16:47,618 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0cdf20d390633a07b568f3cdc141614f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,618 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218207618"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218207618"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218207618"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218207618"}]},"ts":"1689218207618"} 2023-07-13 03:16:47,622 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 03:16:47,622 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 0cdf20d390633a07b568f3cdc141614f, server=jenkins-hbase20.apache.org,34063,1689218203851 in 188 msec 2023-07-13 03:16:47,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-13 03:16:47,625 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, ASSIGN in 344 msec 2023-07-13 03:16:47,626 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:47,626 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218207626"}]},"ts":"1689218207626"} 2023-07-13 03:16:47,629 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-13 03:16:47,632 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:47,633 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 398 msec 2023-07-13 03:16:47,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:47,844 INFO [Listener at localhost.localdomain/44085] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-13 03:16:47,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:47,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-13 03:16:47,848 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:47,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-13 03:16:47,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 03:16:47,862 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:47,863 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:48040, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:47,869 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-13 03:16:47,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 03:16:47,952 INFO [Listener at localhost.localdomain/44085] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
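Here the namespace quota rejects CreateTableProcedure pid=19: np1 permits at most 5 regions, np1:table1 already occupies 1, and the requested np1:table2 would bring the total to 6, so the procedure is rolled back with QuotaExceededException and the client sees the create fail. A hedged sketch of the caller side; the split keys below are illustrative (the actual keys used by the test are not in this log) and simply request five regions so the cap is exceeded:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class Np1Table2Sketch {
  // Sketch only: 'admin' is assumed to be an already-open Admin handle.
  public static void tryCreateTable2(Admin admin) throws IOException {
    TableDescriptor table2 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table2"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .build();
    // Four split keys -> five regions requested; together with the one region of
    // np1:table1 that makes 6, which is over the namespace cap of 5.
    byte[][] splits = {
        Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d"), Bytes.toBytes("e") };
    try {
      admin.createTable(table2, splits);
    } catch (IOException e) {
      // The master rolls the procedure back; the message quoted in the log above
      // ("... is not allowed to have 6 regions ...") comes through here.
      System.err.println("np1:table2 rejected by namespace quota: " + e.getMessage());
    }
  }
}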
2023-07-13 03:16:47,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:47,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:47,954 INFO [Listener at localhost.localdomain/44085] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-13 03:16:47,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable np1:table1 2023-07-13 03:16:47,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-13 03:16:47,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 03:16:47,957 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218207957"}]},"ts":"1689218207957"} 2023-07-13 03:16:47,958 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-13 03:16:47,959 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-13 03:16:47,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, UNASSIGN}] 2023-07-13 03:16:47,961 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, UNASSIGN 2023-07-13 03:16:47,961 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0cdf20d390633a07b568f3cdc141614f, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:47,961 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218207961"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218207961"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218207961"}]},"ts":"1689218207961"} 2023-07-13 03:16:47,963 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 0cdf20d390633a07b568f3cdc141614f, server=jenkins-hbase20.apache.org,34063,1689218203851}] 2023-07-13 03:16:48,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 03:16:48,115 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:48,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 0cdf20d390633a07b568f3cdc141614f, disabling compactions & flushes 2023-07-13 03:16:48,115 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:48,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:48,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. after waiting 0 ms 2023-07-13 03:16:48,115 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:48,119 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:48,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f. 2023-07-13 03:16:48,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 0cdf20d390633a07b568f3cdc141614f: 2023-07-13 03:16:48,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:48,122 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0cdf20d390633a07b568f3cdc141614f, regionState=CLOSED 2023-07-13 03:16:48,122 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218208122"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218208122"}]},"ts":"1689218208122"} 2023-07-13 03:16:48,125 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-13 03:16:48,125 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 0cdf20d390633a07b568f3cdc141614f, server=jenkins-hbase20.apache.org,34063,1689218203851 in 160 msec 2023-07-13 03:16:48,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-13 03:16:48,126 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0cdf20d390633a07b568f3cdc141614f, UNASSIGN in 165 msec 2023-07-13 03:16:48,126 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218208126"}]},"ts":"1689218208126"} 2023-07-13 03:16:48,127 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-13 03:16:48,128 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-13 03:16:48,130 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 175 msec 2023-07-13 03:16:48,259 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 03:16:48,260 INFO [Listener at localhost.localdomain/44085] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-13 03:16:48,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete np1:table1 2023-07-13 03:16:48,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,266 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-13 03:16:48,267 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:48,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:48,274 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 03:16:48,280 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/fam1, FileablePath, hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/recovered.edits] 2023-07-13 03:16:48,287 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/recovered.edits/4.seqid to hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/archive/data/np1/table1/0cdf20d390633a07b568f3cdc141614f/recovered.edits/4.seqid 2023-07-13 03:16:48,289 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/.tmp/data/np1/table1/0cdf20d390633a07b568f3cdc141614f 2023-07-13 03:16:48,290 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-13 03:16:48,295 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,298 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some 
vestigial 1 rows of np1:table1 from hbase:meta 2023-07-13 03:16:48,300 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-13 03:16:48,302 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,302 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-13 03:16:48,302 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218208302"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:48,304 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 03:16:48,304 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0cdf20d390633a07b568f3cdc141614f, NAME => 'np1:table1,,1689218207233.0cdf20d390633a07b568f3cdc141614f.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 03:16:48,304 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-13 03:16:48,304 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218208304"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:48,306 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-13 03:16:48,309 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-13 03:16:48,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 49 msec 2023-07-13 03:16:48,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-13 03:16:48,378 INFO [Listener at localhost.localdomain/44085] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-13 03:16:48,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete np1 2023-07-13 03:16:48,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,392 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,395 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,397 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 03:16:48,398 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-13 03:16:48,398 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:48,400 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,402 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-13 03:16:48,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-13 03:16:48,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39355] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-13 03:16:48,499 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 03:16:48,500 INFO [Listener at localhost.localdomain/44085] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b5e8ff2 to 127.0.0.1:62986 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] util.JVMClusterUtil(257): Found active master hash=836197881, stopped=false 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 03:16:48,500 DEBUG [Listener at localhost.localdomain/44085] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-13 03:16:48,501 INFO [Listener at localhost.localdomain/44085] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:48,501 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:48,501 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:48,501 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:48,501 DEBUG [Listener at 
localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:48,501 INFO [Listener at localhost.localdomain/44085] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 03:16:48,501 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:48,503 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:48,503 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:48,503 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:48,503 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:48,503 DEBUG [Listener at localhost.localdomain/44085] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3aa549a3 to 127.0.0.1:62986 2023-07-13 03:16:48,503 DEBUG [Listener at localhost.localdomain/44085] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,43619,1689218203580' ***** 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,34063,1689218203851' ***** 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,38781,1689218204060' ***** 2023-07-13 03:16:48,504 INFO [Listener at localhost.localdomain/44085] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:48,504 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:48,504 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:48,504 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:48,514 INFO [RS:0;jenkins-hbase20:43619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@368a5eaa{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:48,514 INFO [RS:1;jenkins-hbase20:34063] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@325c7eeb{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:48,514 INFO [RS:2;jenkins-hbase20:38781] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@398de0cc{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:48,515 INFO [RS:0;jenkins-hbase20:43619] server.AbstractConnector(383): Stopped ServerConnector@7025d83c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:48,515 INFO [RS:1;jenkins-hbase20:34063] server.AbstractConnector(383): Stopped ServerConnector@731646f8{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:48,515 INFO [RS:2;jenkins-hbase20:38781] server.AbstractConnector(383): Stopped ServerConnector@48a49787{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:48,515 INFO [RS:0;jenkins-hbase20:43619] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:48,515 INFO [RS:2;jenkins-hbase20:38781] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:48,515 INFO [RS:1;jenkins-hbase20:34063] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:48,516 INFO [RS:0;jenkins-hbase20:43619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@f9e9e62{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:48,518 INFO [RS:1;jenkins-hbase20:34063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3da9a951{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:48,518 INFO [RS:2;jenkins-hbase20:38781] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@79db073d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:48,518 INFO [RS:1;jenkins-hbase20:34063] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@184ce85b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:48,518 INFO [RS:0;jenkins-hbase20:43619] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@78931022{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:48,518 INFO [RS:2;jenkins-hbase20:38781] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6ffd4a0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:48,519 INFO [RS:2;jenkins-hbase20:38781] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:48,519 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:48,519 INFO [RS:2;jenkins-hbase20:38781] 
flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:48,520 INFO [RS:2;jenkins-hbase20:38781] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:48,520 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(3305): Received CLOSE for bcaf4edb61699cc22e3e13e2d72deddc 2023-07-13 03:16:48,520 INFO [RS:1;jenkins-hbase20:34063] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:48,520 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(3305): Received CLOSE for e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:48,520 INFO [RS:1;jenkins-hbase20:34063] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:48,520 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:48,520 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:48,520 DEBUG [RS:2;jenkins-hbase20:38781] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5b9b1de8 to 127.0.0.1:62986 2023-07-13 03:16:48,520 INFO [RS:0;jenkins-hbase20:43619] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:48,520 INFO [RS:1;jenkins-hbase20:34063] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:48,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing bcaf4edb61699cc22e3e13e2d72deddc, disabling compactions & flushes 2023-07-13 03:16:48,522 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(3305): Received CLOSE for 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:48,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:48,522 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:48,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 9a36231cb34559f193a2957e83cea336, disabling compactions & flushes 2023-07-13 03:16:48,522 DEBUG [RS:1;jenkins-hbase20:34063] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7fc985a5 to 127.0.0.1:62986 2023-07-13 03:16:48,522 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:48,522 INFO [RS:0;jenkins-hbase20:43619] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:48,521 DEBUG [RS:2;jenkins-hbase20:38781] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,522 INFO [RS:0;jenkins-hbase20:43619] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:48,522 DEBUG [RS:1;jenkins-hbase20:34063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 
2023-07-13 03:16:48,523 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 03:16:48,522 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:48,523 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1478): Online Regions={9a36231cb34559f193a2957e83cea336=hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336.} 2023-07-13 03:16:48,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:48,524 DEBUG [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1504): Waiting on 9a36231cb34559f193a2957e83cea336 2023-07-13 03:16:48,523 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:48,523 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 03:16:48,524 DEBUG [RS:0;jenkins-hbase20:43619] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1a20b7da to 127.0.0.1:62986 2023-07-13 03:16:48,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. after waiting 0 ms 2023-07-13 03:16:48,523 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. after waiting 0 ms 2023-07-13 03:16:48,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:48,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:48,524 DEBUG [RS:0;jenkins-hbase20:43619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,524 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1478): Online Regions={bcaf4edb61699cc22e3e13e2d72deddc=hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc., e582e8d4769bf8c2dea6f99da2e9924c=hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c.} 2023-07-13 03:16:48,524 INFO [RS:0;jenkins-hbase20:43619] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:48,524 DEBUG [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1504): Waiting on bcaf4edb61699cc22e3e13e2d72deddc, e582e8d4769bf8c2dea6f99da2e9924c 2023-07-13 03:16:48,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing bcaf4edb61699cc22e3e13e2d72deddc 1/1 column families, dataSize=642 B heapSize=1.10 KB 2023-07-13 03:16:48,524 INFO [RS:0;jenkins-hbase20:43619] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:48,525 INFO [RS:0;jenkins-hbase20:43619] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-13 03:16:48,525 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 03:16:48,525 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 03:16:48,525 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-13 03:16:48,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:48,525 DEBUG [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-13 03:16:48,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:48,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:48,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:48,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:48,526 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.90 KB heapSize=11.10 KB 2023-07-13 03:16:48,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/quota/9a36231cb34559f193a2957e83cea336/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:48,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 2023-07-13 03:16:48,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 9a36231cb34559f193a2957e83cea336: 2023-07-13 03:16:48,530 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689218206978.9a36231cb34559f193a2957e83cea336. 
2023-07-13 03:16:48,552 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=642 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/.tmp/m/014461e1901b4c35827903b8076ac6a1 2023-07-13 03:16:48,561 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.27 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/info/b25b36fc6cad4c15884dd71875d9486d 2023-07-13 03:16:48,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/.tmp/m/014461e1901b4c35827903b8076ac6a1 as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/m/014461e1901b4c35827903b8076ac6a1 2023-07-13 03:16:48,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b25b36fc6cad4c15884dd71875d9486d 2023-07-13 03:16:48,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/m/014461e1901b4c35827903b8076ac6a1, entries=1, sequenceid=7, filesize=4.9 K 2023-07-13 03:16:48,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~642 B/642, heapSize ~1.09 KB/1112, currentSize=0 B/0 for bcaf4edb61699cc22e3e13e2d72deddc in 46ms, sequenceid=7, compaction requested=false 2023-07-13 03:16:48,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-13 03:16:48,573 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/rsgroup/bcaf4edb61699cc22e3e13e2d72deddc/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-13 03:16:48,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:48,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 2023-07-13 03:16:48,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for bcaf4edb61699cc22e3e13e2d72deddc: 2023-07-13 03:16:48,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689218206051.bcaf4edb61699cc22e3e13e2d72deddc. 
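The flush entries above (DefaultStoreFlusher writing under .tmp, then HRegionFileSystem "Committing ... as ...") show the usual write-to-temp-then-rename pattern for store files. The sketch below only illustrates that pattern with the plain Hadoop FileSystem API; the class name, the placeholder paths and the error handling are assumptions for illustration and are not HBase's actual HRegionFileSystem code.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustrative sketch only: flush output lands in the region's .tmp directory and is
    // then "committed" into the column-family directory by a rename, as in the log above.
    public class CommitTmpFileSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Hypothetical flush output and its final location inside the store directory.
        Path tmpFile = new Path("/data/hbase/rsgroup/region/.tmp/m/hfile");
        Path storeFile = new Path("/data/hbase/rsgroup/region/m/hfile");
        // The commit step is a rename on HDFS from .tmp into the store directory.
        if (!fs.rename(tmpFile, storeFile)) {
          throw new java.io.IOException("Failed to commit " + tmpFile + " to " + storeFile);
        }
      }
    }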
2023-07-13 03:16:48,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing e582e8d4769bf8c2dea6f99da2e9924c, disabling compactions & flushes 2023-07-13 03:16:48,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:48,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:48,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. after waiting 0 ms 2023-07-13 03:16:48,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:48,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing e582e8d4769bf8c2dea6f99da2e9924c 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-13 03:16:48,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/rep_barrier/c6308907167f4fa1bcd9145abe002d54 2023-07-13 03:16:48,586 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6308907167f4fa1bcd9145abe002d54 2023-07-13 03:16:48,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/.tmp/info/04b69f1a436a46d796c7c7bf0acc5b70 2023-07-13 03:16:48,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 04b69f1a436a46d796c7c7bf0acc5b70 2023-07-13 03:16:48,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/.tmp/info/04b69f1a436a46d796c7c7bf0acc5b70 as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/info/04b69f1a436a46d796c7c7bf0acc5b70 2023-07-13 03:16:48,603 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,605 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/table/47f6f0a402cc4372a98fed4fcfc9c93c 2023-07-13 03:16:48,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
04b69f1a436a46d796c7c7bf0acc5b70 2023-07-13 03:16:48,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/info/04b69f1a436a46d796c7c7bf0acc5b70, entries=3, sequenceid=8, filesize=5.0 K 2023-07-13 03:16:48,610 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47f6f0a402cc4372a98fed4fcfc9c93c 2023-07-13 03:16:48,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for e582e8d4769bf8c2dea6f99da2e9924c in 30ms, sequenceid=8, compaction requested=false 2023-07-13 03:16:48,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-13 03:16:48,615 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/info/b25b36fc6cad4c15884dd71875d9486d as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/info/b25b36fc6cad4c15884dd71875d9486d 2023-07-13 03:16:48,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b25b36fc6cad4c15884dd71875d9486d 2023-07-13 03:16:48,622 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/info/b25b36fc6cad4c15884dd71875d9486d, entries=32, sequenceid=31, filesize=8.5 K 2023-07-13 03:16:48,623 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/rep_barrier/c6308907167f4fa1bcd9145abe002d54 as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/rep_barrier/c6308907167f4fa1bcd9145abe002d54 2023-07-13 03:16:48,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/namespace/e582e8d4769bf8c2dea6f99da2e9924c/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-13 03:16:48,630 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 2023-07-13 03:16:48,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for e582e8d4769bf8c2dea6f99da2e9924c: 2023-07-13 03:16:48,630 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689218205736.e582e8d4769bf8c2dea6f99da2e9924c. 
2023-07-13 03:16:48,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6308907167f4fa1bcd9145abe002d54 2023-07-13 03:16:48,631 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/rep_barrier/c6308907167f4fa1bcd9145abe002d54, entries=1, sequenceid=31, filesize=4.9 K 2023-07-13 03:16:48,632 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/.tmp/table/47f6f0a402cc4372a98fed4fcfc9c93c as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/table/47f6f0a402cc4372a98fed4fcfc9c93c 2023-07-13 03:16:48,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 47f6f0a402cc4372a98fed4fcfc9c93c 2023-07-13 03:16:48,640 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/table/47f6f0a402cc4372a98fed4fcfc9c93c, entries=8, sequenceid=31, filesize=5.2 K 2023-07-13 03:16:48,641 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.90 KB/6045, heapSize ~11.05 KB/11320, currentSize=0 B/0 for 1588230740 in 116ms, sequenceid=31, compaction requested=false 2023-07-13 03:16:48,641 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-13 03:16:48,661 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-13 03:16:48,662 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:48,662 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:48,662 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:48,663 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:48,724 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,34063,1689218203851; all regions closed. 2023-07-13 03:16:48,724 DEBUG [RS:1;jenkins-hbase20:34063] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 03:16:48,724 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,38781,1689218204060; all regions closed. 2023-07-13 03:16:48,724 DEBUG [RS:2;jenkins-hbase20:38781] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-13 03:16:48,725 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,43619,1689218203580; all regions closed. 2023-07-13 03:16:48,725 DEBUG [RS:0;jenkins-hbase20:43619] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-13 03:16:48,742 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580/jenkins-hbase20.apache.org%2C43619%2C1689218203580.meta.1689218205591.meta not finished, retry = 0 2023-07-13 03:16:48,743 DEBUG [RS:1;jenkins-hbase20:34063] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs 2023-07-13 03:16:48,743 INFO [RS:1;jenkins-hbase20:34063] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C34063%2C1689218203851:(num 1689218205345) 2023-07-13 03:16:48,743 DEBUG [RS:1;jenkins-hbase20:34063] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,743 INFO [RS:1;jenkins-hbase20:34063] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,744 INFO [RS:1;jenkins-hbase20:34063] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:48,744 INFO [RS:1;jenkins-hbase20:34063] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:48,744 INFO [RS:1;jenkins-hbase20:34063] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:48,744 INFO [RS:1;jenkins-hbase20:34063] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:48,745 INFO [RS:1;jenkins-hbase20:34063] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:34063 2023-07-13 03:16:48,746 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-13 03:16:48,750 DEBUG [RS:2;jenkins-hbase20:38781] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:48,750 INFO [RS:2;jenkins-hbase20:38781] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C38781%2C1689218204060:(num 1689218205348) 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:48,750 DEBUG [RS:2;jenkins-hbase20:38781] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34063,1689218203851 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,751 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,750 INFO [RS:2;jenkins-hbase20:38781] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,750 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,751 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,34063,1689218203851] 2023-07-13 03:16:48,751 INFO [RS:2;jenkins-hbase20:38781] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:48,751 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,34063,1689218203851; numProcessing=1 2023-07-13 03:16:48,751 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:48,751 INFO [RS:2;jenkins-hbase20:38781] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
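The NodeDeleted events on /hbase/rs/... and the NodeChildrenChanged events on /hbase/rs above are how the master-side tracker learns that a region server's ephemeral znode has gone away. A minimal sketch of that watch pattern using the plain ZooKeeper client follows; the quorum address, class name and printouts are assumptions for illustration, not the ZKWatcher/RegionServerTracker code that produced these log lines.

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    // Illustrative only: registers a one-shot watch on the children of /hbase/rs, so a
    // deleted ephemeral child fires a NodeChildrenChanged event like the ones logged above.
    public class RsZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> { });
        Watcher watcher = new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            if (event.getType() == Event.EventType.NodeChildrenChanged) {
              System.out.println("region server set changed under " + event.getPath());
            }
          }
        };
        // getChildren both reads the current membership and re-arms the watch.
        List<String> servers = zk.getChildren("/hbase/rs", watcher);
        System.out.println("online region servers: " + servers);
      }
    }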
2023-07-13 03:16:48,751 INFO [RS:2;jenkins-hbase20:38781] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:48,751 INFO [RS:2;jenkins-hbase20:38781] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:48,753 INFO [RS:2;jenkins-hbase20:38781] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:38781 2023-07-13 03:16:48,753 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,34063,1689218203851 already deleted, retry=false 2023-07-13 03:16:48,753 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,34063,1689218203851 expired; onlineServers=2 2023-07-13 03:16:48,755 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:48,755 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,38781,1689218204060 2023-07-13 03:16:48,755 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,756 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,38781,1689218204060] 2023-07-13 03:16:48,756 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,38781,1689218204060; numProcessing=2 2023-07-13 03:16:48,757 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,38781,1689218204060 already deleted, retry=false 2023-07-13 03:16:48,757 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,38781,1689218204060 expired; onlineServers=1 2023-07-13 03:16:48,845 DEBUG [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs 2023-07-13 03:16:48,845 INFO [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43619%2C1689218203580.meta:.meta(num 1689218205591) 2023-07-13 03:16:48,851 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/WALs/jenkins-hbase20.apache.org,43619,1689218203580/jenkins-hbase20.apache.org%2C43619%2C1689218203580.1689218205367 not finished, retry = 0 2023-07-13 03:16:48,954 DEBUG [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/oldWALs 2023-07-13 03:16:48,954 INFO [RS:0;jenkins-hbase20:43619] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C43619%2C1689218203580:(num 1689218205367) 2023-07-13 03:16:48,954 DEBUG [RS:0;jenkins-hbase20:43619] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,954 INFO 
[RS:0;jenkins-hbase20:43619] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:48,954 INFO [RS:0;jenkins-hbase20:43619] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:48,954 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:48,955 INFO [RS:0;jenkins-hbase20:43619] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:43619 2023-07-13 03:16:48,958 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:48,958 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,43619,1689218203580 2023-07-13 03:16:48,959 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,43619,1689218203580] 2023-07-13 03:16:48,959 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,43619,1689218203580; numProcessing=3 2023-07-13 03:16:48,960 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,43619,1689218203580 already deleted, retry=false 2023-07-13 03:16:48,960 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,43619,1689218203580 expired; onlineServers=0 2023-07-13 03:16:48,960 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,39355,1689218203363' ***** 2023-07-13 03:16:48,960 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 03:16:48,960 DEBUG [M:0;jenkins-hbase20:39355] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35e8cf1b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:48,960 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:48,963 INFO [M:0;jenkins-hbase20:39355] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@40a68960{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:48,964 INFO [M:0;jenkins-hbase20:39355] server.AbstractConnector(383): Stopped ServerConnector@3bfe2510{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:48,964 INFO [M:0;jenkins-hbase20:39355] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:48,964 INFO [M:0;jenkins-hbase20:39355] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@640210ce{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:48,964 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:48,964 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:48,964 INFO [M:0;jenkins-hbase20:39355] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3f9a90e1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:48,965 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:48,971 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,39355,1689218203363 2023-07-13 03:16:48,971 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,39355,1689218203363; all regions closed. 2023-07-13 03:16:48,971 DEBUG [M:0;jenkins-hbase20:39355] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:48,971 INFO [M:0;jenkins-hbase20:39355] master.HMaster(1491): Stopping master jetty server 2023-07-13 03:16:48,971 INFO [M:0;jenkins-hbase20:39355] server.AbstractConnector(383): Stopped ServerConnector@66207073{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:48,972 DEBUG [M:0;jenkins-hbase20:39355] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 03:16:48,972 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-13 03:16:48,972 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218204971] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218204971,5,FailOnTimeoutGroup] 2023-07-13 03:16:48,972 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218204966] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218204966,5,FailOnTimeoutGroup] 2023-07-13 03:16:48,972 DEBUG [M:0;jenkins-hbase20:39355] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 03:16:48,974 INFO [M:0;jenkins-hbase20:39355] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 03:16:48,974 INFO [M:0;jenkins-hbase20:39355] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-13 03:16:48,974 INFO [M:0;jenkins-hbase20:39355] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:48,974 DEBUG [M:0;jenkins-hbase20:39355] master.HMaster(1512): Stopping service threads 2023-07-13 03:16:48,975 INFO [M:0;jenkins-hbase20:39355] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 03:16:48,975 ERROR [M:0;jenkins-hbase20:39355] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 03:16:48,976 INFO [M:0;jenkins-hbase20:39355] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 03:16:48,976 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 03:16:48,976 DEBUG [M:0;jenkins-hbase20:39355] zookeeper.ZKUtil(398): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-13 03:16:48,976 WARN [M:0;jenkins-hbase20:39355] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-13 03:16:48,976 INFO [M:0;jenkins-hbase20:39355] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 03:16:48,977 INFO [M:0;jenkins-hbase20:39355] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 03:16:48,977 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:48,977 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:48,977 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:48,977 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:48,977 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:48,977 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.08 KB heapSize=109.23 KB 2023-07-13 03:16:49,004 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,004 INFO [RS:2;jenkins-hbase20:38781] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,38781,1689218204060; zookeeper connection closed. 
2023-07-13 03:16:49,004 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:38781-0x1008454bb3b0003, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,006 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4698dcb1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4698dcb1 2023-07-13 03:16:49,007 INFO [M:0;jenkins-hbase20:39355] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.08 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/bf1e7c2cc9664a99bb51d65e5945cd42 2023-07-13 03:16:49,014 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/bf1e7c2cc9664a99bb51d65e5945cd42 as hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/bf1e7c2cc9664a99bb51d65e5945cd42 2023-07-13 03:16:49,031 INFO [M:0;jenkins-hbase20:39355] regionserver.HStore(1080): Added hdfs://localhost.localdomain:40633/user/jenkins/test-data/e5da33bf-c309-9716-bffa-8c3cfc5525e5/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/bf1e7c2cc9664a99bb51d65e5945cd42, entries=24, sequenceid=194, filesize=12.4 K 2023-07-13 03:16:49,032 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegion(2948): Finished flush of dataSize ~93.08 KB/95315, heapSize ~109.22 KB/111840, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 55ms, sequenceid=194, compaction requested=false 2023-07-13 03:16:49,039 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:49,039 DEBUG [M:0;jenkins-hbase20:39355] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:49,049 INFO [M:0;jenkins-hbase20:39355] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 03:16:49,049 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:49,050 INFO [M:0;jenkins-hbase20:39355] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:39355 2023-07-13 03:16:49,051 DEBUG [M:0;jenkins-hbase20:39355] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,39355,1689218203363 already deleted, retry=false 2023-07-13 03:16:49,104 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,104 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:34063-0x1008454bb3b0002, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,105 INFO [RS:1;jenkins-hbase20:34063] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,34063,1689218203851; zookeeper connection closed. 
2023-07-13 03:16:49,113 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d577b55] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d577b55 2023-07-13 03:16:49,204 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,205 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): master:39355-0x1008454bb3b0000, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,205 INFO [M:0;jenkins-hbase20:39355] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,39355,1689218203363; zookeeper connection closed. 2023-07-13 03:16:49,305 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,305 INFO [RS:0;jenkins-hbase20:43619] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,43619,1689218203580; zookeeper connection closed. 2023-07-13 03:16:49,305 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): regionserver:43619-0x1008454bb3b0001, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:49,305 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@622fe1cb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@622fe1cb 2023-07-13 03:16:49,305 INFO [Listener at localhost.localdomain/44085] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-13 03:16:49,305 WARN [Listener at localhost.localdomain/44085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:49,310 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:49,416 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:49,416 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2131735286-148.251.75.209-1689218202298 (Datanode Uuid 5b073fb3-8b5d-4ba4-ac60-50ff0a5d0725) service to localhost.localdomain/127.0.0.1:40633 2023-07-13 03:16:49,416 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data5/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,417 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data6/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread 
Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,418 WARN [Listener at localhost.localdomain/44085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:49,421 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:49,531 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:49,532 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2131735286-148.251.75.209-1689218202298 (Datanode Uuid ccdcbc3d-adc9-4167-9ab3-4e0d9618f958) service to localhost.localdomain/127.0.0.1:40633 2023-07-13 03:16:49,533 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data3/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,534 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data4/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,537 WARN [Listener at localhost.localdomain/44085] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:49,545 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:49,648 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:49,649 WARN [BP-2131735286-148.251.75.209-1689218202298 heartbeating to localhost.localdomain/127.0.0.1:40633] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-2131735286-148.251.75.209-1689218202298 (Datanode Uuid b77c83ae-9641-4299-80d1-81ced1e96813) service to localhost.localdomain/127.0.0.1:40633 2023-07-13 03:16:49,650 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data1/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,650 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/cluster_253178af-242e-c4f7-9f5d-96fe9741eada/dfs/data/data2/current/BP-2131735286-148.251.75.209-1689218202298] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:49,661 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Stopped 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-13 03:16:49,780 INFO [Listener at localhost.localdomain/44085] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 03:16:49,814 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-13 03:16:49,814 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.log.dir so I do NOT create it in target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9f570025-7ca0-cdf3-6f73-22764e481636/hadoop.tmp.dir so I do NOT create it in target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75, deleteOnExit=true 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/test.cache.data in system properties and HBase conf 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.tmp.dir in system properties and HBase conf 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir in system properties and HBase conf 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-13 03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-13 
03:16:49,815 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-13 03:16:49,815 DEBUG [Listener at localhost.localdomain/44085] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-13 03:16:49,816 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:49,816 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-13 03:16:49,816 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-13 03:16:49,816 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:49,816 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 
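At this point the old minicluster is fully down and the test utility immediately starts a fresh one with the same StartMiniClusterOption seen in the log (1 master, 3 region servers, 3 datanodes, 1 ZK server). Below is a rough sketch of that stop/start cycle as it would look from test code; the class name and the placement of assertions are assumptions, and only the HBaseTestingUtility/StartMiniClusterOption calls themselves are taken from what the log records.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    // Rough sketch of the restart cycle the log records, using the branch-2.x test API.
    public class MiniClusterRestartSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);   // logs "Starting up minicluster with option: ..."
        // ... run test assertions against the running cluster ...
        util.shutdownMiniCluster();      // logs "Minicluster is down"
      }
    }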
2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/nfs.dump.dir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir in system properties and HBase conf 2023-07-13 03:16:49,817 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-13 03:16:49,818 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-13 03:16:49,818 INFO [Listener at localhost.localdomain/44085] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-13 03:16:49,822 WARN [Listener at localhost.localdomain/44085] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:49,822 WARN [Listener at localhost.localdomain/44085] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:49,858 WARN [Listener at localhost.localdomain/44085] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:49,861 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:49,872 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/Jetty_localhost_localdomain_38327_hdfs____.npw5ir/webapp 2023-07-13 03:16:49,879 DEBUG [Listener at localhost.localdomain/44085-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1008454bb3b000a, quorum=127.0.0.1:62986, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-13 03:16:49,879 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1008454bb3b000a, quorum=127.0.0.1:62986, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-13 03:16:49,975 INFO [Listener at localhost.localdomain/44085] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38327 2023-07-13 03:16:49,980 WARN [Listener at localhost.localdomain/44085] conf.Configuration(1701): No unit for 
dfs.heartbeat.interval(3) assuming SECONDS 2023-07-13 03:16:49,981 WARN [Listener at localhost.localdomain/44085] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-13 03:16:50,039 WARN [Listener at localhost.localdomain/38103] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:50,070 WARN [Listener at localhost.localdomain/38103] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:50,073 WARN [Listener at localhost.localdomain/38103] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:50,074 INFO [Listener at localhost.localdomain/38103] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:50,082 INFO [Listener at localhost.localdomain/38103] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/Jetty_localhost_40805_datanode____.or9w7s/webapp 2023-07-13 03:16:50,119 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:50,120 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 03:16:50,120 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 03:16:50,162 INFO [Listener at localhost.localdomain/38103] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40805 2023-07-13 03:16:50,170 WARN [Listener at localhost.localdomain/39119] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:50,185 WARN [Listener at localhost.localdomain/39119] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:50,190 WARN [Listener at localhost.localdomain/39119] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:50,192 INFO [Listener at localhost.localdomain/39119] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:50,198 INFO [Listener at localhost.localdomain/39119] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/Jetty_localhost_39407_datanode____.jgw32c/webapp 2023-07-13 03:16:50,284 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc1eaf68522be47fc: Processing first storage report for DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c from datanode 046b911f-274d-45d8-907b-3d6d7861bded 2023-07-13 03:16:50,284 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 
0xc1eaf68522be47fc: from storage DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c node DatanodeRegistration(127.0.0.1:34961, datanodeUuid=046b911f-274d-45d8-907b-3d6d7861bded, infoPort=40953, infoSecurePort=0, ipcPort=39119, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,284 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc1eaf68522be47fc: Processing first storage report for DS-5ee28749-31de-4568-a205-e1660b94dfbf from datanode 046b911f-274d-45d8-907b-3d6d7861bded 2023-07-13 03:16:50,284 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc1eaf68522be47fc: from storage DS-5ee28749-31de-4568-a205-e1660b94dfbf node DatanodeRegistration(127.0.0.1:34961, datanodeUuid=046b911f-274d-45d8-907b-3d6d7861bded, infoPort=40953, infoSecurePort=0, ipcPort=39119, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,337 INFO [Listener at localhost.localdomain/39119] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39407 2023-07-13 03:16:50,366 WARN [Listener at localhost.localdomain/32769] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:50,397 WARN [Listener at localhost.localdomain/32769] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-13 03:16:50,400 WARN [Listener at localhost.localdomain/32769] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-13 03:16:50,401 INFO [Listener at localhost.localdomain/32769] log.Slf4jLog(67): jetty-6.1.26 2023-07-13 03:16:50,410 INFO [Listener at localhost.localdomain/32769] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/Jetty_localhost_45851_datanode____.ic78iu/webapp 2023-07-13 03:16:50,457 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xece912f874aa0cb6: Processing first storage report for DS-fa8e52be-3008-4bea-95b5-c288d99d0c25 from datanode cacb45dd-0f3b-42ba-9e2d-a12a2c7e8a1d 2023-07-13 03:16:50,457 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xece912f874aa0cb6: from storage DS-fa8e52be-3008-4bea-95b5-c288d99d0c25 node DatanodeRegistration(127.0.0.1:37667, datanodeUuid=cacb45dd-0f3b-42ba-9e2d-a12a2c7e8a1d, infoPort=43075, infoSecurePort=0, ipcPort=32769, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,457 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xece912f874aa0cb6: Processing first storage report for DS-9b071aae-01fc-47df-8d2b-6682c90cb2c3 from datanode cacb45dd-0f3b-42ba-9e2d-a12a2c7e8a1d 2023-07-13 03:16:50,457 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xece912f874aa0cb6: from storage DS-9b071aae-01fc-47df-8d2b-6682c90cb2c3 node DatanodeRegistration(127.0.0.1:37667, 
datanodeUuid=cacb45dd-0f3b-42ba-9e2d-a12a2c7e8a1d, infoPort=43075, infoSecurePort=0, ipcPort=32769, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,510 INFO [Listener at localhost.localdomain/32769] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45851 2023-07-13 03:16:50,520 WARN [Listener at localhost.localdomain/45255] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-13 03:16:50,631 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87a32a381bf8cf87: Processing first storage report for DS-d810f180-5a5e-4d9f-9cc8-02217874441c from datanode 0a41fae6-7cb1-4785-bf04-0aa618fdac3d 2023-07-13 03:16:50,631 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87a32a381bf8cf87: from storage DS-d810f180-5a5e-4d9f-9cc8-02217874441c node DatanodeRegistration(127.0.0.1:38005, datanodeUuid=0a41fae6-7cb1-4785-bf04-0aa618fdac3d, infoPort=34815, infoSecurePort=0, ipcPort=45255, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,631 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x87a32a381bf8cf87: Processing first storage report for DS-f777c724-33b6-4182-aa07-f9c791473bee from datanode 0a41fae6-7cb1-4785-bf04-0aa618fdac3d 2023-07-13 03:16:50,631 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x87a32a381bf8cf87: from storage DS-f777c724-33b6-4182-aa07-f9c791473bee node DatanodeRegistration(127.0.0.1:38005, datanodeUuid=0a41fae6-7cb1-4785-bf04-0aa618fdac3d, infoPort=34815, infoSecurePort=0, ipcPort=45255, storageInfo=lv=-57;cid=testClusterID;nsid=482875018;c=1689218209824), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-13 03:16:50,635 DEBUG [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8 2023-07-13 03:16:50,643 INFO [Listener at localhost.localdomain/45255] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/zookeeper_0, clientPort=57116, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-13 03:16:50,644 INFO [Listener at localhost.localdomain/45255] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57116 2023-07-13 03:16:50,644 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to 
namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,645 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,689 INFO [Listener at localhost.localdomain/45255] util.FSUtils(471): Created version file at hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 with version=8 2023-07-13 03:16:50,689 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost.localdomain:34135/user/jenkins/test-data/84a82e95-8352-209e-fd77-7e2f09f7f692/hbase-staging 2023-07-13 03:16:50,690 DEBUG [Listener at localhost.localdomain/45255] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-13 03:16:50,690 DEBUG [Listener at localhost.localdomain/45255] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-13 03:16:50,690 DEBUG [Listener at localhost.localdomain/45255] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-13 03:16:50,690 DEBUG [Listener at localhost.localdomain/45255] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-13 03:16:50,691 INFO [Listener at localhost.localdomain/45255] client.ConnectionUtils(127): master/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:50,691 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,691 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,691 INFO [Listener at localhost.localdomain/45255] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:50,691 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,692 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:50,692 INFO [Listener at localhost.localdomain/45255] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:50,692 INFO [Listener at localhost.localdomain/45255] ipc.NettyRpcServer(120): Bind to /148.251.75.209:35861 2023-07-13 03:16:50,693 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,694 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to 
namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,695 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=master:35861 connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:50,711 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:358610x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:50,720 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:35861-0x1008454d7cc0000 connected 2023-07-13 03:16:50,732 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:50,732 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:50,733 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:50,733 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35861 2023-07-13 03:16:50,733 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35861 2023-07-13 03:16:50,734 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35861 2023-07-13 03:16:50,734 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35861 2023-07-13 03:16:50,734 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35861 2023-07-13 03:16:50,737 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:50,737 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:50,737 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:50,738 INFO [Listener at localhost.localdomain/45255] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-13 03:16:50,738 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:50,738 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:50,738 
INFO [Listener at localhost.localdomain/45255] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 03:16:50,739 INFO [Listener at localhost.localdomain/45255] http.HttpServer(1146): Jetty bound to port 38563 2023-07-13 03:16:50,739 INFO [Listener at localhost.localdomain/45255] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:50,742 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,742 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@31d636bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:50,742 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,743 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5ce2c06b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:50,832 INFO [Listener at localhost.localdomain/45255] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:50,832 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:50,833 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:50,833 INFO [Listener at localhost.localdomain/45255] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:50,833 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,834 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2feb7bcb{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/jetty-0_0_0_0-38563-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9020573682232968195/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:50,835 INFO [Listener at localhost.localdomain/45255] server.AbstractConnector(333): Started ServerConnector@54a66b85{HTTP/1.1, (http/1.1)}{0.0.0.0:38563} 2023-07-13 03:16:50,835 INFO [Listener at localhost.localdomain/45255] server.Server(415): Started @46367ms 2023-07-13 03:16:50,836 INFO [Listener at localhost.localdomain/45255] master.HMaster(444): hbase.rootdir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8, hbase.cluster.distributed=false 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] 
ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:50,847 INFO [Listener at localhost.localdomain/45255] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:50,848 INFO [Listener at localhost.localdomain/45255] ipc.NettyRpcServer(120): Bind to /148.251.75.209:46211 2023-07-13 03:16:50,848 INFO [Listener at localhost.localdomain/45255] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:50,849 DEBUG [Listener at localhost.localdomain/45255] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:50,850 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,851 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,851 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46211 connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:50,854 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:462110x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:50,856 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:462110x0, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:50,856 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46211-0x1008454d7cc0001 connected 2023-07-13 03:16:50,857 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:50,857 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:50,858 DEBUG [Listener at 
localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46211 2023-07-13 03:16:50,859 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46211 2023-07-13 03:16:50,859 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46211 2023-07-13 03:16:50,859 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46211 2023-07-13 03:16:50,859 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46211 2023-07-13 03:16:50,862 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:50,862 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:50,862 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:50,863 INFO [Listener at localhost.localdomain/45255] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:50,863 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:50,863 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:50,863 INFO [Listener at localhost.localdomain/45255] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
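The region server above registers its watchers against the ZooKeeper ensemble at 127.0.0.1:57116, the client port reported by MiniZooKeeperCluster earlier in this log. A test or client reaches the same mini cluster through the standard connection API; a minimal sketch, assuming the usual HBase client classes, with the port hard-coded only because it appears in this particular run (it is ephemeral):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MiniClusterClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.setInt("hbase.zookeeper.property.clientPort", 57116); // from the MiniZooKeeperCluster line above
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // The cluster id should match the hbase.id written by the master later in this log.
      System.out.println("cluster id: " + admin.getClusterMetrics().getClusterId());
    }
  }
}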
2023-07-13 03:16:50,864 INFO [Listener at localhost.localdomain/45255] http.HttpServer(1146): Jetty bound to port 45379 2023-07-13 03:16:50,864 INFO [Listener at localhost.localdomain/45255] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:50,866 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,866 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6267750{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:50,866 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,867 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79ad1c66{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:50,955 INFO [Listener at localhost.localdomain/45255] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:50,956 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:50,956 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:50,956 INFO [Listener at localhost.localdomain/45255] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:50,957 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,958 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f800616{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/jetty-0_0_0_0-45379-hbase-server-2_4_18-SNAPSHOT_jar-_-any-286316213435389816/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:50,959 INFO [Listener at localhost.localdomain/45255] server.AbstractConnector(333): Started ServerConnector@5406abae{HTTP/1.1, (http/1.1)}{0.0.0.0:45379} 2023-07-13 03:16:50,959 INFO [Listener at localhost.localdomain/45255] server.Server(415): Started @46491ms 2023-07-13 03:16:50,969 INFO [Listener at localhost.localdomain/45255] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:50,969 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,969 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:50,970 INFO [Listener at localhost.localdomain/45255] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:50,970 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:50,970 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:50,970 INFO [Listener at localhost.localdomain/45255] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:50,973 INFO [Listener at localhost.localdomain/45255] ipc.NettyRpcServer(120): Bind to /148.251.75.209:36825 2023-07-13 03:16:50,973 INFO [Listener at localhost.localdomain/45255] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:50,975 DEBUG [Listener at localhost.localdomain/45255] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:50,976 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,977 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:50,978 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36825 connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:50,984 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:368250x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:50,986 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:368250x0, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:50,987 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36825-0x1008454d7cc0002 connected 2023-07-13 03:16:50,987 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:50,988 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:50,988 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36825 2023-07-13 03:16:50,988 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36825 2023-07-13 03:16:50,989 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36825 2023-07-13 03:16:50,989 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36825 2023-07-13 03:16:50,991 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36825 2023-07-13 03:16:50,993 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:50,993 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:50,993 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:50,994 INFO [Listener at localhost.localdomain/45255] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:50,994 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:50,994 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:50,994 INFO [Listener at localhost.localdomain/45255] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
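The same RPC-executor and HTTP-server setup repeats for each of the three region servers (ports 46211, 36825 and, below, 34353). Once HBaseTestingUtility has finished starting them, a test can enumerate the live region servers; a short sketch, assuming the MiniHBaseCluster accessors of the 2.x test API and an already-started util instance:

import java.util.List;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class RegionServerListingSketch {
  // 'util' is assumed to be the HBaseTestingUtility that started the cluster above.
  static void printRegionServers(HBaseTestingUtility util) {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    List<JVMClusterUtil.RegionServerThread> threads = cluster.getLiveRegionServerThreads();
    for (JVMClusterUtil.RegionServerThread t : threads) {
      // Prints names of the form jenkins-hbase20.apache.org,46211,<startcode>
      System.out.println(t.getRegionServer().getServerName());
    }
  }
}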
2023-07-13 03:16:50,995 INFO [Listener at localhost.localdomain/45255] http.HttpServer(1146): Jetty bound to port 38849 2023-07-13 03:16:50,995 INFO [Listener at localhost.localdomain/45255] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:50,997 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,997 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6da034c6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:50,997 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:50,998 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7c0bbd48{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:51,092 INFO [Listener at localhost.localdomain/45255] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:51,093 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:51,093 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:51,093 INFO [Listener at localhost.localdomain/45255] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:51,094 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:51,094 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1dbe62ba{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/jetty-0_0_0_0-38849-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3859423322531559256/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:51,096 INFO [Listener at localhost.localdomain/45255] server.AbstractConnector(333): Started ServerConnector@5579b584{HTTP/1.1, (http/1.1)}{0.0.0.0:38849} 2023-07-13 03:16:51,097 INFO [Listener at localhost.localdomain/45255] server.Server(415): Started @46629ms 2023-07-13 03:16:51,105 INFO [Listener at localhost.localdomain/45255] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 
03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 03:16:51,106 INFO [Listener at localhost.localdomain/45255] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:51,107 INFO [Listener at localhost.localdomain/45255] ipc.NettyRpcServer(120): Bind to /148.251.75.209:34353 2023-07-13 03:16:51,107 INFO [Listener at localhost.localdomain/45255] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:51,108 DEBUG [Listener at localhost.localdomain/45255] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:51,109 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:51,110 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:51,110 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34353 connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:51,113 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:343530x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:51,115 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:343530x0, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:51,116 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34353-0x1008454d7cc0003 connected 2023-07-13 03:16:51,116 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:51,117 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:51,118 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34353 2023-07-13 03:16:51,119 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34353 2023-07-13 03:16:51,119 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34353 2023-07-13 03:16:51,119 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34353 2023-07-13 03:16:51,119 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34353 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:51,121 INFO [Listener at localhost.localdomain/45255] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
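Each embedded Jetty info server above binds to an arbitrary free port (38563 for the master, 45379/38849/36639 for the region servers) because of the earlier "Setting ... InfoServer Port to random" lines, which in the 2.x code path appear to amount to setting the info-port properties to 0. A minimal configuration sketch of that behaviour, assuming the standard hbase-site property names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class InfoServerPortSketch {
  static Configuration randomizedUiPorts() {
    Configuration conf = HBaseConfiguration.create();
    // 0 lets the embedded Jetty pick a free ephemeral port; -1 would disable the web UI entirely.
    conf.setInt("hbase.master.info.port", 0);
    conf.setInt("hbase.regionserver.info.port", 0);
    return conf;
  }
}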
2023-07-13 03:16:51,122 INFO [Listener at localhost.localdomain/45255] http.HttpServer(1146): Jetty bound to port 36639 2023-07-13 03:16:51,122 INFO [Listener at localhost.localdomain/45255] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:51,123 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:51,123 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@76205173{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:51,123 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:51,123 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@71b1b09d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:51,181 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-13 03:16:51,232 INFO [Listener at localhost.localdomain/45255] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:51,232 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:51,232 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:51,233 INFO [Listener at localhost.localdomain/45255] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-13 03:16:51,238 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:51,239 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4398561f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/jetty-0_0_0_0-36639-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8262978838284092861/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:51,241 INFO [Listener at localhost.localdomain/45255] server.AbstractConnector(333): Started ServerConnector@40ba18c9{HTTP/1.1, (http/1.1)}{0.0.0.0:36639} 2023-07-13 03:16:51,242 INFO [Listener at localhost.localdomain/45255] server.Server(415): Started @46774ms 2023-07-13 03:16:51,245 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:51,254 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3c2b629a{HTTP/1.1, (http/1.1)}{0.0.0.0:46459} 2023-07-13 03:16:51,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] server.Server(415): 
Started @46787ms 2023-07-13 03:16:51,255 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,256 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:51,256 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,257 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:51,257 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:51,257 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:51,257 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:51,258 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,261 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:51,261 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:51,261 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase20.apache.org,35861,1689218210690 from backup master directory 2023-07-13 03:16:51,270 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,270 WARN [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
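The master above has claimed /hbase/master and is promoting itself from backup to active (the "Registered as active master" line follows immediately below). Tests normally block on that transition before issuing admin calls; a short sketch of the usual wait, assuming the MiniHBaseCluster API of the 2.x test utility:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.master.HMaster;

public class ActiveMasterWaitSketch {
  // 'util' is assumed to be the HBaseTestingUtility that started the cluster above.
  static HMaster waitForActiveMaster(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    // Blocks until a master has become active and initialized (up to 60 seconds here).
    cluster.waitForActiveAndReadyMaster(60_000L);
    return cluster.getMaster(); // e.g. the jenkins-hbase20.apache.org,35861,... instance in this run
  }
}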
2023-07-13 03:16:51,270 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-13 03:16:51,270 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,289 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/hbase.id with ID: 4d057173-fbac-4b70-a449-849015a43107 2023-07-13 03:16:51,299 INFO [master/jenkins-hbase20:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:51,301 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,318 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x11e8d79e to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:51,321 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60d8f61b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:51,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:51,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-13 03:16:51,322 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:51,324 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store-tmp 2023-07-13 03:16:51,334 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, 
parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:51,335 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:51,335 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:51,335 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/WALs/jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,338 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C35861%2C1689218210690, suffix=, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/WALs/jenkins-hbase20.apache.org,35861,1689218210690, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/oldWALs, maxLogs=10 2023-07-13 03:16:51,354 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:51,362 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:51,373 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:51,375 INFO [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/WALs/jenkins-hbase20.apache.org,35861,1689218210690/jenkins-hbase20.apache.org%2C35861%2C1689218210690.1689218211338 2023-07-13 03:16:51,375 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK], DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK], DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK]] 2023-07-13 03:16:51,375 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:51,375 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:51,375 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,375 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,378 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,379 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-13 03:16:51,380 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-13 03:16:51,380 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,381 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,382 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,384 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-13 03:16:51,385 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:51,386 INFO [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11143595360, jitterRate=0.03782819211483002}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:51,386 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:51,386 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-13 03:16:51,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-13 03:16:51,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-13 03:16:51,387 INFO [master/jenkins-hbase20:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-13 03:16:51,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-13 03:16:51,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-13 03:16:51,388 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-13 03:16:51,389 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-13 03:16:51,390 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-13 03:16:51,390 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-13 03:16:51,391 INFO [master/jenkins-hbase20:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-13 03:16:51,391 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-13 03:16:51,392 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,392 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-13 03:16:51,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-13 03:16:51,393 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-13 03:16:51,394 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:51,394 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:51,394 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:51,394 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:51,394 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,394 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase20.apache.org,35861,1689218210690, sessionid=0x1008454d7cc0000, setting cluster-up flag (Was=false) 2023-07-13 03:16:51,399 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] 
procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-13 03:16:51,401 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,403 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,405 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-13 03:16:51,406 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:51,406 WARN [master/jenkins-hbase20:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.hbase-snapshot/.tmp 2023-07-13 03:16:51,407 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-13 03:16:51,407 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-13 03:16:51,407 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-13 03:16:51,408 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:51,408 INFO [master/jenkins-hbase20:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-13 03:16:51,409 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:51,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:51,419 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
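The StochasticLoadBalancer entry above logs maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800 and maxRunningTime=30000. These values are normally driven by configuration keys under hbase.master.balancer.stochastic.*; the key names below are an assumption matching the logged fields, so treat this as a sketch of how such values could be overridden rather than the test's own setup:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuning {
    public static Configuration tunedConf() {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys, mirroring the values logged by StochasticLoadBalancer above.
        conf.setInt("hbase.master.balancer.stochastic.maxSteps", 1_000_000);
        conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
        conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
        conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
        return conf;
    }
}
```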
2023-07-13 03:16:51,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-13 03:16:51,420 INFO [master/jenkins-hbase20:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=5, maxPoolSize=5 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase20:0, corePoolSize=10, maxPoolSize=10 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:51,420 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,421 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689218241421 2023-07-13 03:16:51,422 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-13 03:16:51,423 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,423 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:51,423 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-13 03:16:51,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-13 03:16:51,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-13 03:16:51,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-13 03:16:51,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-13 03:16:51,424 INFO [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-13 03:16:51,426 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218211425,5,FailOnTimeoutGroup] 2023-07-13 03:16:51,426 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218211426,5,FailOnTimeoutGroup] 2023-07-13 03:16:51,426 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,427 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-13 03:16:51,427 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:51,427 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,427 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,438 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:51,438 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:51,438 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 2023-07-13 03:16:51,444 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(951): ClusterId : 4d057173-fbac-4b70-a449-849015a43107 2023-07-13 03:16:51,444 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(951): ClusterId : 4d057173-fbac-4b70-a449-849015a43107 2023-07-13 03:16:51,446 INFO 
[RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(951): ClusterId : 4d057173-fbac-4b70-a449-849015a43107 2023-07-13 03:16:51,446 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:51,449 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:51,447 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:51,450 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:51,450 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:51,450 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:51,450 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:51,451 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:51,450 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:51,452 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:51,452 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:51,454 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:51,454 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ReadOnlyZKClient(139): Connect 0x6a629e53 to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:51,458 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:51,461 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ReadOnlyZKClient(139): Connect 0x464877d5 to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:51,461 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ReadOnlyZKClient(139): Connect 0x7a98c8bc to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:51,463 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:51,466 DEBUG [RS:2;jenkins-hbase20:34353] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64a56d6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:51,466 DEBUG [RS:2;jenkins-hbase20:34353] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@609af523, compressor=null, 
tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:51,467 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/info 2023-07-13 03:16:51,468 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:51,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:51,470 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:51,470 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:51,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,471 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:51,471 DEBUG [RS:0;jenkins-hbase20:46211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34ae178, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:51,471 DEBUG [RS:0;jenkins-hbase20:46211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4fd39080, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:51,472 DEBUG [RS:1;jenkins-hbase20:36825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e0c8556, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:51,472 DEBUG [RS:1;jenkins-hbase20:36825] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b28d5ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:51,472 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/table 2023-07-13 03:16:51,472 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:51,473 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,473 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740 2023-07-13 03:16:51,474 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740 2023-07-13 03:16:51,476 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 03:16:51,476 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase20:34353 2023-07-13 03:16:51,476 INFO [RS:2;jenkins-hbase20:34353] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:51,476 INFO [RS:2;jenkins-hbase20:34353] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:51,476 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1022): About to register with Master. 2023-07-13 03:16:51,477 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,35861,1689218210690 with isa=jenkins-hbase20.apache.org/148.251.75.209:34353, startcode=1689218211105 2023-07-13 03:16:51,477 DEBUG [RS:2;jenkins-hbase20:34353] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:51,477 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:51,479 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:58383, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:51,480 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35861] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:51,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-13 03:16:51,481 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:51,482 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 2023-07-13 03:16:51,482 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38103 2023-07-13 03:16:51,482 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38563 2023-07-13 03:16:51,482 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase20:46211 2023-07-13 03:16:51,482 INFO [RS:0;jenkins-hbase20:46211] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:51,482 INFO [RS:0;jenkins-hbase20:46211] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:51,482 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 03:16:51,482 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11167308480, jitterRate=0.0400366485118866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:51,482 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:51,482 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:51,482 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:51,482 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:51,482 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:51,482 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:51,483 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:51,483 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:51,483 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-13 03:16:51,483 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-13 03:16:51,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-13 03:16:51,484 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase20:36825 2023-07-13 03:16:51,484 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-13 03:16:51,484 INFO [RS:1;jenkins-hbase20:36825] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:51,484 INFO [RS:1;jenkins-hbase20:36825] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:51,485 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1022): About to register with Master. 
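Above, InitMetaProcedure finishes writing the filesystem layout for hbase:meta and schedules a TransitRegionStateProcedure to assign the region. A client-side way to confirm meta ends up assigned is to resolve its location through the public client API; this is a sketch that assumes connection settings are picked up from an hbase-site.xml on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator meta = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
            // Once the ASSIGN procedure scheduled above completes, this resolves
            // to whichever region server hbase:meta was opened on.
            HRegionLocation loc = meta.getRegionLocation(HConstants.EMPTY_START_ROW, true);
            System.out.println("hbase:meta is on " + loc.getServerName());
        }
    }
}
```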
2023-07-13 03:16:51,485 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-13 03:16:51,486 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:51,487 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,35861,1689218210690 with isa=jenkins-hbase20.apache.org/148.251.75.209:36825, startcode=1689218210969 2023-07-13 03:16:51,487 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,35861,1689218210690 with isa=jenkins-hbase20.apache.org/148.251.75.209:46211, startcode=1689218210846 2023-07-13 03:16:51,488 DEBUG [RS:1;jenkins-hbase20:36825] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:51,488 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,488 DEBUG [RS:0;jenkins-hbase20:46211] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:51,488 WARN [RS:2;jenkins-hbase20:34353] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:51,488 INFO [RS:2;jenkins-hbase20:34353] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:51,488 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,34353,1689218211105] 2023-07-13 03:16:51,488 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,491 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:60229, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:51,491 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55855, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:51,491 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35861] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,491 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-13 03:16:51,491 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-13 03:16:51,491 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35861] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,491 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:51,491 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 2023-07-13 03:16:51,491 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-13 03:16:51,491 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38103 2023-07-13 03:16:51,491 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38563 2023-07-13 03:16:51,492 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 2023-07-13 03:16:51,492 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38103 2023-07-13 03:16:51,492 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38563 2023-07-13 03:16:51,497 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:51,498 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,498 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,36825,1689218210969] 2023-07-13 03:16:51,498 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,498 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,46211,1689218210846] 2023-07-13 03:16:51,498 WARN [RS:1;jenkins-hbase20:36825] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-13 03:16:51,498 WARN [RS:0;jenkins-hbase20:46211] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
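The three reportForDuty / "Registering regionserver" entries above, together with the RSGroup listener's "Updated with servers: 3", show all three region servers joining the cluster (and therefore the default RSGroup). A sketch of verifying the same thing through the public Admin API, again assuming connection settings come from the classpath configuration:

```java
import java.util.EnumSet;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            ClusterMetrics metrics =
                admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
            // After all three reportForDuty calls above succeed, this prints three entries.
            for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
                System.out.println("live region server: " + sn);
            }
        }
    }
}
```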
2023-07-13 03:16:51,498 INFO [RS:1;jenkins-hbase20:36825] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:51,498 INFO [RS:0;jenkins-hbase20:46211] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:51,498 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,499 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,499 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,499 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,499 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,500 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:51,500 INFO [RS:2;jenkins-hbase20:34353] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:51,508 INFO [RS:2;jenkins-hbase20:34353] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:51,511 INFO [RS:2;jenkins-hbase20:34353] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:51,511 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,512 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:51,515 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:51,515 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,516 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,516 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,517 DEBUG [RS:2;jenkins-hbase20:34353] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,517 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,517 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,518 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:51,518 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,518 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,518 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,518 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:51,518 INFO [RS:1;jenkins-hbase20:36825] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:51,519 DEBUG [RS:0;jenkins-hbase20:46211] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:51,520 INFO [RS:0;jenkins-hbase20:46211] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:51,522 INFO [RS:1;jenkins-hbase20:36825] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:51,523 INFO [RS:0;jenkins-hbase20:46211] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:51,523 INFO [RS:1;jenkins-hbase20:36825] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:51,523 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,523 INFO [RS:0;jenkins-hbase20:46211] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:51,523 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,523 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:51,523 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:51,524 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
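The MemStoreFlusher entries above report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M. The low mark is the global limit scaled by the lower-limit fraction (assumed here to be the 0.95 default of hbase.regionserver.global.memstore.size.lower.limit); a tiny sketch of that arithmetic, using the figures from the log:

```java
public class MemStoreLimits {
    public static void main(String[] args) {
        // Value from the MemStoreFlusher lines above.
        double globalLimitMb = 782.4;
        // Assumed default for hbase.regionserver.global.memstore.size.lower.limit.
        double lowerLimitFraction = 0.95;
        double lowMarkMb = globalLimitMb * lowerLimitFraction;
        System.out.printf("low mark = %.1f M%n", lowMarkMb); // ~743.3 M, matching the log
    }
}
```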
2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,524 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,525 DEBUG [RS:0;jenkins-hbase20:46211] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,527 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,530 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,531 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,531 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,532 DEBUG [RS:1;jenkins-hbase20:36825] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:51,535 INFO [RS:2;jenkins-hbase20:34353] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:51,536 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,34353,1689218211105-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,536 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,537 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,537 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,544 INFO [RS:0;jenkins-hbase20:46211] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:51,544 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,46211,1689218210846-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:51,545 INFO [RS:2;jenkins-hbase20:34353] regionserver.Replication(203): jenkins-hbase20.apache.org,34353,1689218211105 started 2023-07-13 03:16:51,545 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,34353,1689218211105, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:34353, sessionid=0x1008454d7cc0003 2023-07-13 03:16:51,545 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:51,545 DEBUG [RS:2;jenkins-hbase20:34353] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,545 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34353,1689218211105' 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,34353,1689218211105' 2023-07-13 03:16:51,546 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:51,547 DEBUG [RS:2;jenkins-hbase20:34353] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:51,547 DEBUG [RS:2;jenkins-hbase20:34353] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:51,547 INFO [RS:2;jenkins-hbase20:34353] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:51,547 INFO [RS:2;jenkins-hbase20:34353] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-13 03:16:51,550 INFO [RS:1;jenkins-hbase20:36825] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:51,550 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,36825,1689218210969-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:51,554 INFO [RS:0;jenkins-hbase20:46211] regionserver.Replication(203): jenkins-hbase20.apache.org,46211,1689218210846 started 2023-07-13 03:16:51,554 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,46211,1689218210846, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:46211, sessionid=0x1008454d7cc0001 2023-07-13 03:16:51,554 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:51,554 DEBUG [RS:0;jenkins-hbase20:46211] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,554 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46211,1689218210846' 2023-07-13 03:16:51,554 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,46211,1689218210846' 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:51,555 DEBUG [RS:0;jenkins-hbase20:46211] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:51,556 DEBUG [RS:0;jenkins-hbase20:46211] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:51,556 INFO [RS:0;jenkins-hbase20:46211] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:51,556 INFO [RS:0;jenkins-hbase20:46211] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 03:16:51,565 INFO [RS:1;jenkins-hbase20:36825] regionserver.Replication(203): jenkins-hbase20.apache.org,36825,1689218210969 started 2023-07-13 03:16:51,565 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,36825,1689218210969, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:36825, sessionid=0x1008454d7cc0002 2023-07-13 03:16:51,565 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:51,565 DEBUG [RS:1;jenkins-hbase20:36825] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,565 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36825,1689218210969' 2023-07-13 03:16:51,565 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,36825,1689218210969' 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:51,566 DEBUG [RS:1;jenkins-hbase20:36825] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:51,567 DEBUG [RS:1;jenkins-hbase20:36825] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:51,567 INFO [RS:1;jenkins-hbase20:36825] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:51,567 INFO [RS:1;jenkins-hbase20:36825] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:51,636 DEBUG [jenkins-hbase20:35861] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:51,637 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,34353,1689218211105, state=OPENING 2023-07-13 03:16:51,638 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-13 03:16:51,639 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:51,639 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:51,639 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,34353,1689218211105}] 2023-07-13 03:16:51,649 INFO [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34353%2C1689218211105, suffix=, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,34353,1689218211105, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs, maxLogs=32 2023-07-13 03:16:51,658 INFO [RS:0;jenkins-hbase20:46211] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C46211%2C1689218210846, suffix=, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,46211,1689218210846, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs, maxLogs=32 2023-07-13 03:16:51,664 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:51,664 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:51,668 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:51,669 INFO [RS:1;jenkins-hbase20:36825] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C36825%2C1689218210969, suffix=, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,36825,1689218210969, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs, maxLogs=32 2023-07-13 03:16:51,672 INFO [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,34353,1689218211105/jenkins-hbase20.apache.org%2C34353%2C1689218211105.1689218211649 2023-07-13 03:16:51,672 DEBUG [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK], DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK], DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK]] 2023-07-13 03:16:51,681 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:51,681 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:51,681 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:51,692 INFO [RS:0;jenkins-hbase20:46211] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,46211,1689218210846/jenkins-hbase20.apache.org%2C46211%2C1689218210846.1689218211659 2023-07-13 03:16:51,692 DEBUG [RS:0;jenkins-hbase20:46211] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK], DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK], DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK]] 2023-07-13 03:16:51,699 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:51,699 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:51,699 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for 
addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:51,701 INFO [RS:1;jenkins-hbase20:36825] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,36825,1689218210969/jenkins-hbase20.apache.org%2C36825%2C1689218210969.1689218211669 2023-07-13 03:16:51,702 DEBUG [RS:1;jenkins-hbase20:36825] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK], DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK], DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK]] 2023-07-13 03:16:51,715 WARN [ReadOnlyZKClient-127.0.0.1:57116@0x11e8d79e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-13 03:16:51,715 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:51,716 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42892, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:51,716 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34353] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 148.251.75.209:42892 deadline: 1689218271716, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,794 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:51,796 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:51,797 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42900, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:51,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-13 03:16:51,801 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:51,803 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C34353%2C1689218211105.meta, suffix=.meta, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,34353,1689218211105, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs, maxLogs=32 2023-07-13 03:16:51,815 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:51,816 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:51,815 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:51,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,34353,1689218211105/jenkins-hbase20.apache.org%2C34353%2C1689218211105.meta.1689218211803.meta 2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK], DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK], DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK]] 2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-13 03:16:51,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-13 03:16:51,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:51,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-13 03:16:51,819 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-13 03:16:51,820 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-13 03:16:51,821 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/info 2023-07-13 03:16:51,821 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/info 2023-07-13 03:16:51,821 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-13 03:16:51,821 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,821 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-13 03:16:51,822 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:51,822 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/rep_barrier 2023-07-13 03:16:51,822 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-13 03:16:51,823 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,823 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-13 03:16:51,823 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/table 2023-07-13 03:16:51,824 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/table 2023-07-13 03:16:51,824 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-13 03:16:51,824 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:51,825 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740 2023-07-13 03:16:51,826 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740 2023-07-13 03:16:51,828 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-13 03:16:51,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-13 03:16:51,829 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10992533440, jitterRate=0.023759454488754272}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-13 03:16:51,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-13 03:16:51,830 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689218211794 2023-07-13 03:16:51,836 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-13 03:16:51,837 INFO [RS_OPEN_META-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-13 03:16:51,838 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase20.apache.org,34353,1689218211105, state=OPEN 2023-07-13 03:16:51,839 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-13 03:16:51,839 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-13 03:16:51,840 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-13 03:16:51,840 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase20.apache.org,34353,1689218211105 in 200 msec 2023-07-13 03:16:51,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-13 03:16:51,841 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 357 msec 2023-07-13 03:16:51,842 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 434 msec 2023-07-13 03:16:51,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689218211843, completionTime=-1 2023-07-13 03:16:51,843 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-13 03:16:51,843 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-13 03:16:51,856 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-13 03:16:51,856 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689218271856 2023-07-13 03:16:51,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689218331857 2023-07-13 03:16:51,857 INFO [master/jenkins-hbase20:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 13 msec 2023-07-13 03:16:51,862 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35861,1689218210690-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35861,1689218210690-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35861,1689218210690-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase20:35861, period=300000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-13 03:16:51,863 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:51,865 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-13 03:16:51,876 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:51,876 DEBUG [master/jenkins-hbase20:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-13 03:16:51,877 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:51,878 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:51,879 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8 empty. 2023-07-13 03:16:51,879 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:51,879 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-13 03:16:51,911 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:51,913 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => c07710fa1db4342318e9b1de545988c8, NAME => 'hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp 2023-07-13 03:16:51,933 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:51,933 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing c07710fa1db4342318e9b1de545988c8, disabling compactions & flushes 2023-07-13 03:16:51,933 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region 
hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:51,933 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:51,933 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. after waiting 0 ms 2023-07-13 03:16:51,933 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:51,933 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:51,934 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for c07710fa1db4342318e9b1de545988c8: 2023-07-13 03:16:51,936 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:51,936 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218211936"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218211936"}]},"ts":"1689218211936"} 2023-07-13 03:16:51,939 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:51,939 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:51,940 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218211940"}]},"ts":"1689218211940"} 2023-07-13 03:16:51,941 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-13 03:16:51,942 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:51,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:51,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:51,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:51,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:51,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c07710fa1db4342318e9b1de545988c8, ASSIGN}] 2023-07-13 03:16:51,945 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=c07710fa1db4342318e9b1de545988c8, ASSIGN 2023-07-13 03:16:51,946 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=c07710fa1db4342318e9b1de545988c8, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36825,1689218210969; forceNewPlan=false, retain=false 2023-07-13 03:16:52,020 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:52,024 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-13 03:16:52,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:52,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:52,028 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,028 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d empty. 
2023-07-13 03:16:52,029 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,029 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-13 03:16:52,061 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:52,062 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c7783c875957060e5428a4304c2bb71d, NAME => 'hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp 2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c7783c875957060e5428a4304c2bb71d, disabling compactions & flushes 2023-07-13 03:16:52,074 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. after waiting 0 ms 2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,074 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 
2023-07-13 03:16:52,074 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c7783c875957060e5428a4304c2bb71d: 2023-07-13 03:16:52,076 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:52,077 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218212077"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218212077"}]},"ts":"1689218212077"} 2023-07-13 03:16:52,078 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:52,078 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:52,079 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218212078"}]},"ts":"1689218212078"} 2023-07-13 03:16:52,080 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-13 03:16:52,081 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:52,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:52,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:52,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:52,082 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:52,082 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c7783c875957060e5428a4304c2bb71d, ASSIGN}] 2023-07-13 03:16:52,083 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c7783c875957060e5428a4304c2bb71d, ASSIGN 2023-07-13 03:16:52,083 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c7783c875957060e5428a4304c2bb71d, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,34353,1689218211105; forceNewPlan=false, retain=false 2023-07-13 03:16:52,083 INFO [jenkins-hbase20:35861] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-13 03:16:52,085 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c07710fa1db4342318e9b1de545988c8, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,085 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218212085"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218212085"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218212085"}]},"ts":"1689218212085"} 2023-07-13 03:16:52,086 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c7783c875957060e5428a4304c2bb71d, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,086 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218212086"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218212086"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218212086"}]},"ts":"1689218212086"} 2023-07-13 03:16:52,087 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure c07710fa1db4342318e9b1de545988c8, server=jenkins-hbase20.apache.org,36825,1689218210969}] 2023-07-13 03:16:52,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure c7783c875957060e5428a4304c2bb71d, server=jenkins-hbase20.apache.org,34353,1689218211105}] 2023-07-13 03:16:52,239 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,239 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-13 03:16:52,241 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-13 03:16:52,243 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c7783c875957060e5428a4304c2bb71d, NAME => 'hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:52,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. service=MultiRowMutationService 2023-07-13 03:16:52,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,244 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:52,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c07710fa1db4342318e9b1de545988c8, NAME => 'hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:52,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:52,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,245 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,245 INFO [StoreOpener-c7783c875957060e5428a4304c2bb71d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,246 INFO [StoreOpener-c07710fa1db4342318e9b1de545988c8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,247 DEBUG [StoreOpener-c7783c875957060e5428a4304c2bb71d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/m 2023-07-13 03:16:52,247 DEBUG [StoreOpener-c7783c875957060e5428a4304c2bb71d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/m 
2023-07-13 03:16:52,247 DEBUG [StoreOpener-c07710fa1db4342318e9b1de545988c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/info 2023-07-13 03:16:52,247 DEBUG [StoreOpener-c07710fa1db4342318e9b1de545988c8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/info 2023-07-13 03:16:52,247 INFO [StoreOpener-c7783c875957060e5428a4304c2bb71d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c7783c875957060e5428a4304c2bb71d columnFamilyName m 2023-07-13 03:16:52,247 INFO [StoreOpener-c07710fa1db4342318e9b1de545988c8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c07710fa1db4342318e9b1de545988c8 columnFamilyName info 2023-07-13 03:16:52,248 INFO [StoreOpener-c7783c875957060e5428a4304c2bb71d-1] regionserver.HStore(310): Store=c7783c875957060e5428a4304c2bb71d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:52,248 INFO [StoreOpener-c07710fa1db4342318e9b1de545988c8-1] regionserver.HStore(310): Store=c07710fa1db4342318e9b1de545988c8/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:52,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,248 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,249 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,249 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,251 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:52,251 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:52,254 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:52,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:52,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c07710fa1db4342318e9b1de545988c8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11268128960, jitterRate=0.04942628741264343}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:52,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c07710fa1db4342318e9b1de545988c8: 2023-07-13 03:16:52,255 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened c7783c875957060e5428a4304c2bb71d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7e4b20b2, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:52,255 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for c7783c875957060e5428a4304c2bb71d: 2023-07-13 03:16:52,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8., pid=8, masterSystemTime=1689218212239 2023-07-13 03:16:52,258 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d., pid=9, masterSystemTime=1689218212239 2023-07-13 03:16:52,260 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:52,261 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 
2023-07-13 03:16:52,261 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=c07710fa1db4342318e9b1de545988c8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,261 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689218212261"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218212261"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218212261"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218212261"}]},"ts":"1689218212261"} 2023-07-13 03:16:52,261 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:52,262 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=c7783c875957060e5428a4304c2bb71d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,262 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689218212262"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218212262"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218212262"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218212262"}]},"ts":"1689218212262"} 2023-07-13 03:16:52,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-13 03:16:52,266 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure c07710fa1db4342318e9b1de545988c8, server=jenkins-hbase20.apache.org,36825,1689218210969 in 177 msec 2023-07-13 03:16:52,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-13 03:16:52,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure c7783c875957060e5428a4304c2bb71d, server=jenkins-hbase20.apache.org,34353,1689218211105 in 177 msec 2023-07-13 03:16:52,271 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-13 03:16:52,271 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=c07710fa1db4342318e9b1de545988c8, ASSIGN in 323 msec 2023-07-13 03:16:52,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-13 03:16:52,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c7783c875957060e5428a4304c2bb71d, ASSIGN in 188 msec 2023-07-13 03:16:52,272 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure 
table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:52,272 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218212272"}]},"ts":"1689218212272"} 2023-07-13 03:16:52,272 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:52,272 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218212272"}]},"ts":"1689218212272"} 2023-07-13 03:16:52,273 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-13 03:16:52,273 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-13 03:16:52,275 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:52,275 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:52,276 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 412 msec 2023-07-13 03:16:52,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 255 msec 2023-07-13 03:16:52,332 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-13 03:16:52,332 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
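The entries above trace the master-side CreateTableProcedure state machine (CREATE_TABLE_ADD_TO_META, CREATE_TABLE_ASSIGN_REGIONS, CREATE_TABLE_UPDATE_DESC_CACHE, CREATE_TABLE_POST_OPERATION) for the system tables hbase:namespace and hbase:rsgroup. As a hedged illustration only: a user-issued table creation goes through the same procedure chain. The table and column-family names below are placeholders, not taken from this test.

```java
// Minimal sketch, not this test's code: a client-side createTable call whose
// server-side execution produces CreateTableProcedure log entries like those above.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
  static void createSampleTable(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("example_table"))          // placeholder table name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf")) // placeholder column family
          .build());
    }
  }
}
```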
2023-07-13 03:16:52,336 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:52,336 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:52,337 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:52,338 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-13 03:16:52,366 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-13 03:16:52,367 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:52,367 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:52,374 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:52,375 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:43480, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:52,378 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-13 03:16:52,397 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:52,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 26 msec 2023-07-13 03:16:52,412 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-13 03:16:52,420 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:52,423 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-13 03:16:52,437 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): 
master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-13 03:16:52,438 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.168sec 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35861,1689218210690-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-13 03:16:52,439 INFO [master/jenkins-hbase20:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,35861,1689218210690-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-13 03:16:52,440 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-13 03:16:52,446 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ReadOnlyZKClient(139): Connect 0x0e1cdbb7 to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:52,454 DEBUG [Listener at localhost.localdomain/45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2effc37c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:52,456 DEBUG [hconnection-0x2fd65012-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:52,457 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42912, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:52,459 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:52,459 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:52,462 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-13 03:16:52,464 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:35152, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-13 03:16:52,467 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, 
state=SyncConnected, path=/hbase/balancer 2023-07-13 03:16:52,467 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:52,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(492): Client=jenkins//148.251.75.209 set balanceSwitch=false 2023-07-13 03:16:52,469 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ReadOnlyZKClient(139): Connect 0x4844a3d1 to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:52,479 DEBUG [Listener at localhost.localdomain/45255] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@550f2c0b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:52,479 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:52,482 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:52,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,489 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1008454d7cc000a connected 2023-07-13 03:16:52,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,494 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-13 03:16:52,509 INFO [Listener at localhost.localdomain/45255] client.ConnectionUtils(127): regionserver/jenkins-hbase20:0 server-side Connection retries=45 2023-07-13 03:16:52,509 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:52,509 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:52,509 INFO [Listener at localhost.localdomain/45255] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-13 03:16:52,509 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-13 03:16:52,510 INFO [Listener at localhost.localdomain/45255] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-13 
03:16:52,510 INFO [Listener at localhost.localdomain/45255] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-13 03:16:52,510 INFO [Listener at localhost.localdomain/45255] ipc.NettyRpcServer(120): Bind to /148.251.75.209:41765 2023-07-13 03:16:52,511 INFO [Listener at localhost.localdomain/45255] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-13 03:16:52,515 DEBUG [Listener at localhost.localdomain/45255] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-13 03:16:52,516 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:52,517 INFO [Listener at localhost.localdomain/45255] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-13 03:16:52,518 INFO [Listener at localhost.localdomain/45255] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41765 connecting to ZooKeeper ensemble=127.0.0.1:57116 2023-07-13 03:16:52,526 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:417650x0, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-13 03:16:52,528 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(162): regionserver:417650x0, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-13 03:16:52,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41765-0x1008454d7cc000b connected 2023-07-13 03:16:52,530 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-13 03:16:52,531 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ZKUtil(164): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-13 03:16:52,531 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41765 2023-07-13 03:16:52,532 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41765 2023-07-13 03:16:52,532 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41765 2023-07-13 03:16:52,533 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41765 2023-07-13 03:16:52,533 DEBUG [Listener at localhost.localdomain/45255] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41765 2023-07-13 03:16:52,535 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-13 03:16:52,535 INFO [Listener at localhost.localdomain/45255] 
http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-13 03:16:52,535 INFO [Listener at localhost.localdomain/45255] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-13 03:16:52,536 INFO [Listener at localhost.localdomain/45255] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-13 03:16:52,536 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-13 03:16:52,536 INFO [Listener at localhost.localdomain/45255] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-13 03:16:52,536 INFO [Listener at localhost.localdomain/45255] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-13 03:16:52,536 INFO [Listener at localhost.localdomain/45255] http.HttpServer(1146): Jetty bound to port 38337 2023-07-13 03:16:52,537 INFO [Listener at localhost.localdomain/45255] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-13 03:16:52,542 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:52,542 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@558abd96{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,AVAILABLE} 2023-07-13 03:16:52,543 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:52,543 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30c62731{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-13 03:16:52,635 INFO [Listener at localhost.localdomain/45255] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-13 03:16:52,636 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-13 03:16:52,636 INFO [Listener at localhost.localdomain/45255] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-13 03:16:52,636 INFO [Listener at localhost.localdomain/45255] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-13 03:16:52,637 INFO [Listener at localhost.localdomain/45255] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-13 03:16:52,637 INFO [Listener at localhost.localdomain/45255] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1940e767{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/java.io.tmpdir/jetty-0_0_0_0-38337-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2683358159244065496/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:52,639 INFO [Listener at localhost.localdomain/45255] server.AbstractConnector(333): Started ServerConnector@1bce2e7b{HTTP/1.1, (http/1.1)}{0.0.0.0:38337} 2023-07-13 03:16:52,639 INFO [Listener at localhost.localdomain/45255] server.Server(415): Started @48171ms 2023-07-13 03:16:52,642 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(951): ClusterId : 4d057173-fbac-4b70-a449-849015a43107 2023-07-13 03:16:52,642 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-13 03:16:52,643 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-13 03:16:52,643 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-13 03:16:52,645 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-13 03:16:52,648 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ReadOnlyZKClient(139): Connect 0x128a311c to 127.0.0.1:57116 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-13 03:16:52,654 DEBUG [RS:3;jenkins-hbase20:41765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c4dcd3d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-13 03:16:52,654 DEBUG [RS:3;jenkins-hbase20:41765] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18725a6c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:52,667 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase20:41765 2023-07-13 03:16:52,667 INFO [RS:3;jenkins-hbase20:41765] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-13 03:16:52,667 INFO [RS:3;jenkins-hbase20:41765] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-13 03:16:52,667 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-13 03:16:52,668 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase20.apache.org,35861,1689218210690 with isa=jenkins-hbase20.apache.org/148.251.75.209:41765, startcode=1689218212508 2023-07-13 03:16:52,668 DEBUG [RS:3;jenkins-hbase20:41765] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-13 03:16:52,671 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:55745, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-13 03:16:52,671 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35861] master.ServerManager(394): Registering regionserver=jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,671 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-13 03:16:52,672 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8 2023-07-13 03:16:52,672 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost.localdomain:38103 2023-07-13 03:16:52,672 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38563 2023-07-13 03:16:52,674 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:52,674 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:52,675 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:52,674 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:52,675 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,675 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:52,675 WARN [RS:3;jenkins-hbase20:41765] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-13 03:16:52,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,675 INFO [RS:3;jenkins-hbase20:41765] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-13 03:16:52,675 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,676 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-13 03:16:52,676 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1948): logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,676 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,678 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase20.apache.org,41765,1689218212508] 2023-07-13 03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,678 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-13 03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 
03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:52,678 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:52,681 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,681 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:52,681 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:52,682 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ZKUtil(162): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:52,682 DEBUG [RS:3;jenkins-hbase20:41765] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-13 03:16:52,683 INFO [RS:3;jenkins-hbase20:41765] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-13 03:16:52,684 INFO [RS:3;jenkins-hbase20:41765] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-13 03:16:52,684 INFO [RS:3;jenkins-hbase20:41765] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-13 03:16:52,684 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:52,684 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-13 03:16:52,686 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase20:0, corePoolSize=2, maxPoolSize=2 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,686 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,687 DEBUG [RS:3;jenkins-hbase20:41765] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase20:0, corePoolSize=1, maxPoolSize=1 2023-07-13 03:16:52,688 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:52,688 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:52,688 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-13 03:16:52,701 INFO [RS:3;jenkins-hbase20:41765] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-13 03:16:52,701 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase20.apache.org,41765,1689218212508-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-13 03:16:52,713 INFO [RS:3;jenkins-hbase20:41765] regionserver.Replication(203): jenkins-hbase20.apache.org,41765,1689218212508 started 2023-07-13 03:16:52,713 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1637): Serving as jenkins-hbase20.apache.org,41765,1689218212508, RpcServer on jenkins-hbase20.apache.org/148.251.75.209:41765, sessionid=0x1008454d7cc000b 2023-07-13 03:16:52,713 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-13 03:16:52,713 DEBUG [RS:3;jenkins-hbase20:41765] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,713 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41765,1689218212508' 2023-07-13 03:16:52,713 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-13 03:16:52,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase20.apache.org,41765,1689218212508' 2023-07-13 03:16:52,714 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-13 03:16:52,715 DEBUG [RS:3;jenkins-hbase20:41765] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-13 03:16:52,715 DEBUG [RS:3;jenkins-hbase20:41765] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-13 03:16:52,715 INFO [RS:3;jenkins-hbase20:41765] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-13 03:16:52,715 INFO [RS:3;jenkins-hbase20:41765] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
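The block above shows a fourth region server (RS:3, port 41765) being brought up inside the running mini cluster after the "Restoring servers: 1" entry. As a minimal sketch under assumptions (a started HBaseTestingUtility held in a variable here called testUtil; this is not claimed to be the test's own restore logic), an extra region server can be added like this:

```java
// Sketch only: start one additional region server in a running mini cluster
// and block until it has checked in, similar to the RS:3 startup logged above.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class StartExtraRegionServerSketch {
  static void startExtraRegionServer(HBaseTestingUtility testUtil) throws Exception {
    JVMClusterUtil.RegionServerThread rst =
        testUtil.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline(); // wait for the new region server to report for duty
  }
}
```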
2023-07-13 03:16:52,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:52,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:52,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:52,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:52,720 DEBUG [hconnection-0x6b328656-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-13 03:16:52,722 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 148.251.75.209:42926, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-13 03:16:52,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:52,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:52,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:35152 deadline: 1689219412731, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
2023-07-13 03:16:52,731 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:52,733 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:52,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,734 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:52,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:52,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:52,780 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=556 (was 523) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2332 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp842260870-2642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins.hfs.9 
java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 39119 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 219363612@qtp-1010168295-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45851 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x464877d5-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially 
hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:40412 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:40633 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData-prefix:jenkins-hbase20.apache.org,35861,1689218210690 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@78b8b2cc sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@32e79f11 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@42c968b5 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp464919042-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp464919042-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8-prefix:jenkins-hbase20.apache.org,46211,1689218210846 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:49534 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 32769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34353 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp854366903-2375 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1102960516_17 at /127.0.0.1:40398 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:1;jenkins-hbase20:36825 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:40384 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:57116@0x6a629e53-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 32769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1104613310_17 at /127.0.0.1:49502 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp854366903-2372 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 152043876@qtp-1010168295-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost.localdomain:38103 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f2a94be-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4b0801fc java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x7a98c8bc-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:40633 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp580492066-2364 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1190534757@qtp-371920585-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp842260870-2643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase20:41765 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp580492066-2367 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x4844a3d1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2334 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase20:46211-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 39119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x5f2a94be-metaLookup-shared--pool-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:54236 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data3/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase20:46211 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-572-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x0e1cdbb7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2337 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1102960516_17 at /127.0.0.1:49518 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost.localdomain:38103 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 32769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1166624705-2331-acceptor-0@33860b0e-ServerConnector@5579b584{HTTP/1.1, (http/1.1)}{0.0.0.0:38849} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp580492066-2360 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost.localdomain:40633 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6b328656-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x7a98c8bc-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:3;jenkins-hbase20:41765-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/44085-SendThread(127.0.0.1:62986) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:54196 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@2f2c7352 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x4844a3d1-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:40633 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1087183359_17 at /127.0.0.1:54152 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1104613310_17 at /127.0.0.1:54206 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 4134160@qtp-1134957935-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp464919042-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@435a3883[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-559-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2333 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:40633 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x6a629e53-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x0e1cdbb7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
qtp842260870-2638 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@646a50bf java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp854366903-2373 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data1/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1102960516_17 at /127.0.0.1:54222 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8-prefix:jenkins-hbase20.apache.org,34353,1689218211105.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 415162824@qtp-1134957935-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:38327 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener 
at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:2;jenkins-hbase20:34353-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 388262572@qtp-1826679281-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40805 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost.localdomain/45255.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 7272932@qtp-1826679281-1 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Server handler 4 on default port 45255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: M:0;jenkins-hbase20:35861 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62986@0x7782f490-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost.localdomain/45255 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) 
java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:34353Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp842260870-2640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x4844a3d1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp506763230-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:36825Replication Statistics #0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp580492066-2366 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost.localdomain:40633 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2fd65012-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1104613310_17 
at /127.0.0.1:40392 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost.localdomain:40633 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase20:41765Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1087183359_17 at /127.0.0.1:49480 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2269 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 38103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp842260870-2641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@62b0095 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data4/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/45255.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 3 on default port 45255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x464877d5-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp842260870-2645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5dd8de02-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data5/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x464877d5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x11e8d79e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 39119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp464919042-2302 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8-prefix:jenkins-hbase20.apache.org,36825,1689218210969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2270-acceptor-0@30d9f419-ServerConnector@54a66b85{HTTP/1.1, (http/1.1)}{0.0.0.0:38563} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 32769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62986@0x7782f490-SendThread(127.0.0.1:62986) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x128a311c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218211426 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Listener at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 38103 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 45255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp854366903-2377 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data6/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35861 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:40633 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:54114 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1166624705-2330 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7fdaf8e4 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 32769 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218211425 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS:2;jenkins-hbase20:34353 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x0e1cdbb7-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase20:46211Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_766413417_17 at /127.0.0.1:49496 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36825 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 39119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@4309815e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-568-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-18e71712-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp580492066-2365 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2336 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp854366903-2379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 45255 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ProcessThread(sid:0 cport:57116): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp854366903-2374 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2cfa6af5 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 38103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp842260870-2639-acceptor-0@6244df2e-ServerConnector@1bce2e7b{HTTP/1.1, (http/1.1)}{0.0.0.0:38337} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:57116 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 39119 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x128a311c-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp506763230-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp854366903-2378 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp580492066-2362 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp854366903-2376-acceptor-0@6fbe1e1e-ServerConnector@3c2b629a{HTTP/1.1, (http/1.1)}{0.0.0.0:46459} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp464919042-2300 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/982436088.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp464919042-2301-acceptor-0@68de035a-ServerConnector@5406abae{HTTP/1.1, (http/1.1)}{0.0.0.0:45379} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1166624705-2335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,39355,1689218203363 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Listener at localhost.localdomain/45255.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3685d3ed java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1087183359_17 at /127.0.0.1:40374 [Receiving block BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6b328656-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost.localdomain:38103 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/44085-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp580492066-2363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 45255 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp842260870-2644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1102960516_17 at /127.0.0.1:49580 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x11e8d79e-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase20:36825-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1087183359_17 at /127.0.0.1:40358 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp580492066-2361-acceptor-0@6e5bd7fc-ServerConnector@40ba18c9{HTTP/1.1, (http/1.1)}{0.0.0.0:36639} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 45255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:40633 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@306d93ff[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8-prefix:jenkins-hbase20.apache.org,34353,1689218211105 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41765 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@51705ed5[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x6a629e53 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2049234709) connection to localhost.localdomain/127.0.0.1:38103 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 38103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-3 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5f2a94be-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x7a98c8bc sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp464919042-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x11e8d79e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-7126916d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57116@0x128a311c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5515afa7 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data2/current/BP-248359729-148.251.75.209-1689218209824 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62986@0x7782f490 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1482018670.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1823335192@qtp-371920585-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39407 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: PacketResponder: BP-248359729-148.251.75.209-1689218209824:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(340966279) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost.localdomain:38103 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: IPC Server handler 2 on default port 38103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 38103 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost.localdomain/45255-SendThread(127.0.0.1:57116) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase20.apache.org,35861,1689218210690 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2e45a313 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 32769 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp464919042-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp506763230-2271 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3a801dc9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-22baccb7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=818 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=516 (was 490) - SystemLoadAverage LEAK? 
-, ProcessCount=170 (was 170), AvailableMemoryMB=3249 (was 3485) 2023-07-13 03:16:52,782 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-13 03:16:52,797 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=556, OpenFileDescriptor=818, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=170, AvailableMemoryMB=3248 2023-07-13 03:16:52,797 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-13 03:16:52,797 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-13 03:16:52,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:52,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:52,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:52,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:52,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:52,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:52,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:52,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:52,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:52,812 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:52,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:52,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 
03:16:52,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:52,817 INFO [RS:3;jenkins-hbase20:41765] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase20.apache.org%2C41765%2C1689218212508, suffix=, logDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,41765,1689218212508, archiveDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs, maxLogs=32 2023-07-13 03:16:52,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:52,831 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:52,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,842 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK] 2023-07-13 03:16:52,842 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK] 2023-07-13 03:16:52,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,846 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK] 2023-07-13 03:16:52,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:52,848 INFO [RS:3;jenkins-hbase20:41765] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/WALs/jenkins-hbase20.apache.org,41765,1689218212508/jenkins-hbase20.apache.org%2C41765%2C1689218212508.1689218212817 2023-07-13 03:16:52,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:52,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:35152 deadline: 1689219412848, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:52,848 DEBUG [RS:3;jenkins-hbase20:41765] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34961,DS-0dcdd7d6-4135-450d-a1b0-58a88351bb0c,DISK], DatanodeInfoWithStorage[127.0.0.1:38005,DS-d810f180-5a5e-4d9f-9cc8-02217874441c,DISK], DatanodeInfoWithStorage[127.0.0.1:37667,DS-fa8e52be-3008-4bea-95b5-c288d99d0c25,DISK]] 2023-07-13 03:16:52,849 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:52,850 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:52,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:52,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:52,851 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:52,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:52,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:52,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:52,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 03:16:52,855 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:52,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(700): Client=jenkins//148.251.75.209 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-13 03:16:52,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 03:16:52,858 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:52,859 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:52,859 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:52,860 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-13 03:16:52,862 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:52,862 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956 empty. 2023-07-13 03:16:52,863 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:52,863 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 03:16:52,876 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-13 03:16:52,877 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 717b687fbac182465427dfb62b22f956, NAME => 't1,,1689218212852.717b687fbac182465427dfb62b22f956.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp 2023-07-13 03:16:52,893 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689218212852.717b687fbac182465427dfb62b22f956.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:52,893 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 717b687fbac182465427dfb62b22f956, disabling compactions & flushes 2023-07-13 03:16:52,893 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:52,894 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:52,894 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689218212852.717b687fbac182465427dfb62b22f956. after waiting 0 ms 2023-07-13 03:16:52,894 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:52,894 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689218212852.717b687fbac182465427dfb62b22f956. 
2023-07-13 03:16:52,894 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 717b687fbac182465427dfb62b22f956: 2023-07-13 03:16:52,896 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-13 03:16:52,896 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218212896"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218212896"}]},"ts":"1689218212896"} 2023-07-13 03:16:52,897 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-13 03:16:52,898 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-13 03:16:52,898 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218212898"}]},"ts":"1689218212898"} 2023-07-13 03:16:52,899 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase20.apache.org=0} racks are {/default-rack=0} 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-13 03:16:52,901 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-13 03:16:52,901 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, ASSIGN}] 2023-07-13 03:16:52,902 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, ASSIGN 2023-07-13 03:16:52,903 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, ASSIGN; state=OFFLINE, location=jenkins-hbase20.apache.org,36825,1689218210969; forceNewPlan=false, retain=false 2023-07-13 03:16:52,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 03:16:53,053 INFO [jenkins-hbase20:35861] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-13 03:16:53,055 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=717b687fbac182465427dfb62b22f956, regionState=OPENING, regionLocation=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:53,056 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218213055"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218213055"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218213055"}]},"ts":"1689218213055"} 2023-07-13 03:16:53,063 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 717b687fbac182465427dfb62b22f956, server=jenkins-hbase20.apache.org,36825,1689218210969}] 2023-07-13 03:16:53,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 03:16:53,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(130): Open t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:53,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 717b687fbac182465427dfb62b22f956, NAME => 't1,,1689218212852.717b687fbac182465427dfb62b22f956.', STARTKEY => '', ENDKEY => ''} 2023-07-13 03:16:53,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(866): Instantiated t1,,1689218212852.717b687fbac182465427dfb62b22f956.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-13 03:16:53,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7894): checking encryption for 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(7897): checking classloading for 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,227 INFO [StoreOpener-717b687fbac182465427dfb62b22f956-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,229 DEBUG [StoreOpener-717b687fbac182465427dfb62b22f956-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956/cf1 2023-07-13 03:16:53,229 DEBUG [StoreOpener-717b687fbac182465427dfb62b22f956-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956/cf1 2023-07-13 03:16:53,229 INFO [StoreOpener-717b687fbac182465427dfb62b22f956-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 717b687fbac182465427dfb62b22f956 columnFamilyName cf1 2023-07-13 03:16:53,230 INFO [StoreOpener-717b687fbac182465427dfb62b22f956-1] regionserver.HStore(310): Store=717b687fbac182465427dfb62b22f956/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-13 03:16:53,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1055): writing seq id for 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-13 03:16:53,241 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1072): Opened 717b687fbac182465427dfb62b22f956; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9560422560, jitterRate=-0.10961626470088959}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-13 03:16:53,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(965): Region open journal for 717b687fbac182465427dfb62b22f956: 2023-07-13 03:16:53,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689218212852.717b687fbac182465427dfb62b22f956., pid=14, masterSystemTime=1689218213217 2023-07-13 03:16:53,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:53,244 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase20:0-0] handler.AssignRegionHandler(158): Opened t1,,1689218212852.717b687fbac182465427dfb62b22f956. 
2023-07-13 03:16:53,244 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=717b687fbac182465427dfb62b22f956, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:53,244 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218213244"},{"qualifier":"server","vlen":32,"tag":[],"timestamp":"1689218213244"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689218213244"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689218213244"}]},"ts":"1689218213244"} 2023-07-13 03:16:53,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-13 03:16:53,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 717b687fbac182465427dfb62b22f956, server=jenkins-hbase20.apache.org,36825,1689218210969 in 183 msec 2023-07-13 03:16:53,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-13 03:16:53,249 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, ASSIGN in 346 msec 2023-07-13 03:16:53,249 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-13 03:16:53,249 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218213249"}]},"ts":"1689218213249"} 2023-07-13 03:16:53,250 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-13 03:16:53,252 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-13 03:16:53,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 399 msec 2023-07-13 03:16:53,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-13 03:16:53,460 INFO [Listener at localhost.localdomain/45255] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-13 03:16:53,460 DEBUG [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-13 03:16:53,460 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:53,463 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-13 03:16:53,463 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:53,463 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-13 03:16:53,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$4(2112): Client=jenkins//148.251.75.209 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-13 03:16:53,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-13 03:16:53,468 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-13 03:16:53,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-13 03:16:53,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 148.251.75.209:35152 deadline: 1689218273464, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-13 03:16:53,471 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:53,482 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=16 msec 2023-07-13 03:16:53,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:53,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:53,573 INFO [Listener at localhost.localdomain/45255] client.HBaseAdmin$15(890): Started disable of t1 2023-07-13 03:16:53,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$11(2418): Client=jenkins//148.251.75.209 disable t1 2023-07-13 03:16:53,574 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-13 03:16:53,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:53,578 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218213578"}]},"ts":"1689218213578"} 2023-07-13 03:16:53,580 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-13 03:16:53,581 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-13 03:16:53,582 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, UNASSIGN}] 2023-07-13 03:16:53,582 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, UNASSIGN 2023-07-13 03:16:53,583 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=717b687fbac182465427dfb62b22f956, regionState=CLOSING, regionLocation=jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:53,583 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218213583"},{"qualifier":"sn","vlen":46,"tag":[],"timestamp":"1689218213583"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689218213583"}]},"ts":"1689218213583"} 2023-07-13 03:16:53,590 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 717b687fbac182465427dfb62b22f956, server=jenkins-hbase20.apache.org,36825,1689218210969}] 2023-07-13 03:16:53,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:53,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(111): Close 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 717b687fbac182465427dfb62b22f956, disabling compactions & flushes 2023-07-13 03:16:53,742 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:53,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:53,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689218212852.717b687fbac182465427dfb62b22f956. after waiting 0 ms 2023-07-13 03:16:53,742 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689218212852.717b687fbac182465427dfb62b22f956. 
2023-07-13 03:16:53,747 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/default/t1/717b687fbac182465427dfb62b22f956/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-13 03:16:53,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed t1,,1689218212852.717b687fbac182465427dfb62b22f956. 2023-07-13 03:16:53,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 717b687fbac182465427dfb62b22f956: 2023-07-13 03:16:53,750 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.UnassignRegionHandler(149): Closed 717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,750 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=717b687fbac182465427dfb62b22f956, regionState=CLOSED 2023-07-13 03:16:53,750 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689218213750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689218213750"}]},"ts":"1689218213750"} 2023-07-13 03:16:53,752 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-13 03:16:53,752 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 717b687fbac182465427dfb62b22f956, server=jenkins-hbase20.apache.org,36825,1689218210969 in 161 msec 2023-07-13 03:16:53,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-13 03:16:53,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=717b687fbac182465427dfb62b22f956, UNASSIGN in 170 msec 2023-07-13 03:16:53,754 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689218213754"}]},"ts":"1689218213754"} 2023-07-13 03:16:53,755 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-13 03:16:53,758 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-13 03:16:53,760 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 185 msec 2023-07-13 03:16:53,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-13 03:16:53,880 INFO [Listener at localhost.localdomain/45255] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-13 03:16:53,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$5(2228): Client=jenkins//148.251.75.209 delete t1 2023-07-13 03:16:53,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-13 03:16:53,886 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 
03:16:53,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-13 03:16:53,887 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-13 03:16:53,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:53,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:53,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:53,890 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,892 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956/cf1, FileablePath, hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956/recovered.edits] 2023-07-13 03:16:53,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 03:16:53,898 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956/recovered.edits/4.seqid to hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/archive/data/default/t1/717b687fbac182465427dfb62b22f956/recovered.edits/4.seqid 2023-07-13 03:16:53,898 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/.tmp/data/default/t1/717b687fbac182465427dfb62b22f956 2023-07-13 03:16:53,899 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-13 03:16:53,901 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-13 03:16:53,903 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-13 03:16:53,904 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-13 03:16:53,905 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-13 03:16:53,906 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-13 03:16:53,906 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689218212852.717b687fbac182465427dfb62b22f956.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689218213906"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:53,907 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-13 03:16:53,907 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 717b687fbac182465427dfb62b22f956, NAME => 't1,,1689218212852.717b687fbac182465427dfb62b22f956.', STARTKEY => '', ENDKEY => ''}] 2023-07-13 03:16:53,907 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-13 03:16:53,907 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689218213907"}]},"ts":"9223372036854775807"} 2023-07-13 03:16:53,909 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-13 03:16:53,911 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-13 03:16:53,912 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 29 msec 2023-07-13 03:16:53,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-13 03:16:53,994 INFO [Listener at localhost.localdomain/45255] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-13 03:16:53,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:53,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:53,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:53,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:53,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,006 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 119 connection: 148.251.75.209:35152 deadline: 1689219414015, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,016 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:54,019 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,021 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,022 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,043 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 556) - Thread LEAK? -, OpenFileDescriptor=833 (was 818) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=170 (was 170), AvailableMemoryMB=3240 (was 3248) 2023-07-13 03:16:54,043 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-13 03:16:54,058 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=170, AvailableMemoryMB=3240 2023-07-13 03:16:54,058 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-13 03:16:54,058 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-13 03:16:54,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:54,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,075 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 
2023-07-13 03:16:54,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414093, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,094 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:54,096 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,097 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 03:16:54,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:54,099 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-13 03:16:54,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-13 03:16:54,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-13 03:16:54,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:54,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,118 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414127, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,128 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:54,130 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,131 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,152 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=170 (was 170), AvailableMemoryMB=3240 (was 3240) 2023-07-13 03:16:54,152 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-13 03:16:54,173 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=170, AvailableMemoryMB=3240 2023-07-13 03:16:54,173 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-13 03:16:54,173 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-13 03:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:54,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,186 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414205, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,206 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:54,208 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,209 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-13 03:16:54,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,244 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414252, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,253 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:54,254 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,256 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,274 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=170 (was 170), AvailableMemoryMB=3240 (was 3240) 2023-07-13 03:16:54,274 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-13 03:16:54,296 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=833, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=170, AvailableMemoryMB=3238 2023-07-13 03:16:54,296 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-13 03:16:54,296 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-13 03:16:54,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-13 03:16:54,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,309 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414319, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,319 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-13 03:16:54,321 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,322 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,323 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-13 03:16:54,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_foo 2023-07-13 03:16:54,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 03:16:54,328 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-13 03:16:54,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$15(3014): Client=jenkins//148.251.75.209 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 03:16:54,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, 
namespace=Group_foo 2023-07-13 03:16:54,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 03:16:54,341 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-13 03:16:54,343 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-13 03:16:54,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-13 03:16:54,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-13 03:16:54,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 148.251.75.209:35152 deadline: 1689219414439, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-13 03:16:54,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$16(3053): Client=jenkins//148.251.75.209 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-13 03:16:54,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:54,466 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 03:16:54,467 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 19 msec 2023-07-13 03:16:54,566 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-13 03:16:54,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup Group_anotherGroup 2023-07-13 03:16:54,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 03:16:54,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-13 03:16:54,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-13 03:16:54,580 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.HMaster$17(3086): Client=jenkins//148.251.75.209 delete Group_foo 2023-07-13 03:16:54,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,589 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,593 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 03:16:54,595 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,596 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-13 03:16:54,596 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 
2023-07-13 03:16:54,597 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,599 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-13 03:16:54,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-13 03:16:54,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-13 03:16:54,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_foo 2023-07-13 03:16:54,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-13 03:16:54,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-13 03:16:54,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219)
    at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
    at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
    at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219)
    at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010)
    at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132)
    at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-13 03:16:54,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 148.251.75.209:35152 deadline: 1689218274709, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-13 03:16:54,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
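The two ConstraintException entries above capture the invariant testNamespaceConstraint exercises: a namespace whose hbase.rsgroup.name configuration references a region server group pins that group (removeRSGroup is rejected until the namespace is modified or deleted), and a namespace cannot be created against a group that does not exist. The following is a minimal client-side sketch of that sequence, not the test's actual code; it assumes the internal RSGroupAdminClient from the hbase-rsgroup module, and class and variable names such as Group_bar are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupNamespaceConstraintSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // add rsgroup Group_foo, then create a namespace that references it
      rsGroupAdmin.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

      try {
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException e) {
        // rejected: "RSGroup Group_foo is referenced by namespace: Group_foo"
      }

      // once the namespace no longer references the group, removal succeeds
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");

      try {
        // a namespace may not reference a group that does not exist
        admin.createNamespace(NamespaceDescriptor.create("Group_bar")
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (ConstraintException e) {
        // rejected: "Region server group foo does not exist."
      }
    }
  }
}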
2023-07-13 03:16:54,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup Group_anotherGroup 2023-07-13 03:16:54,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-13 03:16:54,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//148.251.75.209 move tables [] to rsgroup default 2023-07-13 03:16:54,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
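The "move tables [] to rsgroup default", "move servers [] to rsgroup default", and "remove rsgroup" entries above come from the per-test cleanup that returns the cluster to just the default and master groups. A simplified sketch of that kind of cleanup follows; it is not the actual TestRSGroupsBase teardown, and beyond the RSGroupAdminClient calls visible in the log (moveTables, moveServers, removeRSGroup, list) its structure is assumed.

import java.io.IOException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  /** Move tables and servers back to the default group, then drop every other group. */
  static void cleanupGroups(RSGroupAdminClient rsGroupAdmin) throws IOException {
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
        continue;
      }
      // the server ignores empty sets ("moveTables() passed an empty set. Ignoring.")
      rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
      rsGroupAdmin.removeRSGroup(group.getName());
    }
  }
}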
2023-07-13 03:16:54,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveTables 2023-07-13 03:16:54,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [] to rsgroup default 2023-07-13 03:16:54,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.MoveServers 2023-07-13 03:16:54,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//148.251.75.209 remove rsgroup master 2023-07-13 03:16:54,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-13 03:16:54,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-13 03:16:54,732 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-13 03:16:54,732 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//148.251.75.209 add rsgroup master 2023-07-13 03:16:54,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-13 03:16:54,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-13 03:16:54,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-13 03:16:54,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.AddRSGroup 2023-07-13 03:16:54,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//148.251.75.209 move servers [jenkins-hbase20.apache.org:35861] to rsgroup master 2023-07-13 03:16:54,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-13 03:16:54,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 120 connection: 148.251.75.209:35152 deadline: 1689219414742, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 2023-07-13 03:16:54,744 WARN [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor56.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase20.apache.org:35861 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-13 03:16:54,745 INFO [Listener at localhost.localdomain/45255] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-13 03:16:54,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//148.251.75.209 list rsgroup 2023-07-13 03:16:54,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-13 03:16:54,747 INFO [Listener at localhost.localdomain/45255] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase20.apache.org:34353, jenkins-hbase20.apache.org:36825, jenkins-hbase20.apache.org:41765, jenkins-hbase20.apache.org:46211], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-13 03:16:54,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//148.251.75.209 initiates rsgroup info retrieval, group=default 2023-07-13 03:16:54,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35861] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /148.251.75.209) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-13 03:16:54,765 INFO [Listener at localhost.localdomain/45255] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=833 (was 833), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=170 (was 170), AvailableMemoryMB=3236 (was 3238) 2023-07-13 03:16:54,765 WARN [Listener at localhost.localdomain/45255] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-13 03:16:54,765 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-13 03:16:54,765 INFO [Listener at localhost.localdomain/45255] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-13 03:16:54,765 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0e1cdbb7 to 127.0.0.1:57116 2023-07-13 03:16:54,765 DEBUG [Listener at localhost.localdomain/45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,765 
DEBUG [Listener at localhost.localdomain/45255] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-13 03:16:54,765 DEBUG [Listener at localhost.localdomain/45255] util.JVMClusterUtil(257): Found active master hash=921544213, stopped=false 2023-07-13 03:16:54,766 DEBUG [Listener at localhost.localdomain/45255] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-13 03:16:54,766 DEBUG [Listener at localhost.localdomain/45255] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-13 03:16:54,766 INFO [Listener at localhost.localdomain/45255] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:54,767 INFO [Listener at localhost.localdomain/45255] procedure2.ProcedureExecutor(629): Stopping 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:54,767 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-13 03:16:54,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:54,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:54,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:54,767 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:54,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36825-0x1008454d7cc0002, 
quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-13 03:16:54,768 DEBUG [Listener at localhost.localdomain/45255] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11e8d79e to 127.0.0.1:57116 2023-07-13 03:16:54,768 DEBUG [Listener at localhost.localdomain/45255] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,768 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,46211,1689218210846' ***** 2023-07-13 03:16:54,768 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:54,768 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:54,769 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,36825,1689218210969' ***** 2023-07-13 03:16:54,769 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:54,770 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,34353,1689218211105' ***** 2023-07-13 03:16:54,770 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:54,770 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:54,770 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase20.apache.org,41765,1689218212508' ***** 2023-07-13 03:16:54,770 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:54,772 INFO [Listener at localhost.localdomain/45255] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-13 03:16:54,774 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:54,778 INFO [RS:0;jenkins-hbase20:46211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f800616{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:54,778 INFO [RS:2;jenkins-hbase20:34353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4398561f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:54,778 INFO [RS:1;jenkins-hbase20:36825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1dbe62ba{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:54,779 INFO [RS:3;jenkins-hbase20:41765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1940e767{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-13 03:16:54,779 INFO [RS:0;jenkins-hbase20:46211] server.AbstractConnector(383): Stopped ServerConnector@5406abae{HTTP/1.1, 
(http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:54,780 INFO [RS:3;jenkins-hbase20:41765] server.AbstractConnector(383): Stopped ServerConnector@1bce2e7b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:54,780 INFO [RS:0;jenkins-hbase20:46211] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:54,780 INFO [RS:2;jenkins-hbase20:34353] server.AbstractConnector(383): Stopped ServerConnector@40ba18c9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:54,781 INFO [RS:2;jenkins-hbase20:34353] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:54,781 INFO [RS:0;jenkins-hbase20:46211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@79ad1c66{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:54,780 INFO [RS:1;jenkins-hbase20:36825] server.AbstractConnector(383): Stopped ServerConnector@5579b584{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:54,781 INFO [RS:2;jenkins-hbase20:34353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@71b1b09d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:54,780 INFO [RS:3;jenkins-hbase20:41765] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:54,782 INFO [RS:0;jenkins-hbase20:46211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6267750{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:54,782 INFO [RS:1;jenkins-hbase20:36825] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:54,783 INFO [RS:2;jenkins-hbase20:34353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@76205173{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:54,784 INFO [RS:3;jenkins-hbase20:41765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30c62731{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:54,784 INFO [RS:1;jenkins-hbase20:36825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7c0bbd48{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:54,785 INFO [RS:3;jenkins-hbase20:41765] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@558abd96{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:54,786 INFO [RS:1;jenkins-hbase20:36825] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6da034c6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:54,786 INFO [RS:2;jenkins-hbase20:34353] 
regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:54,785 INFO [RS:0;jenkins-hbase20:46211] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:54,786 INFO [RS:2;jenkins-hbase20:34353] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:54,786 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:54,786 INFO [RS:1;jenkins-hbase20:36825] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:54,786 INFO [RS:0;jenkins-hbase20:46211] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:54,786 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:54,786 INFO [RS:0;jenkins-hbase20:46211] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:54,786 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:54,786 INFO [RS:1;jenkins-hbase20:36825] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:54,786 INFO [RS:3;jenkins-hbase20:41765] regionserver.HeapMemoryManager(220): Stopping 2023-07-13 03:16:54,786 INFO [RS:2;jenkins-hbase20:34353] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:54,787 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-13 03:16:54,787 INFO [RS:3;jenkins-hbase20:41765] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-13 03:16:54,787 INFO [RS:3;jenkins-hbase20:41765] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:54,787 INFO [RS:1;jenkins-hbase20:36825] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-13 03:16:54,787 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:54,787 DEBUG [RS:0;jenkins-hbase20:46211] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x464877d5 to 127.0.0.1:57116 2023-07-13 03:16:54,787 DEBUG [RS:0;jenkins-hbase20:46211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,787 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,46211,1689218210846; all regions closed. 
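The shutdown entries above and below are the minicluster teardown: each region server receives a STOP request, its online regions (hbase:namespace, hbase:rsgroup, hbase:meta) are flushed and closed, and WALs are archived to oldWALs. In the test class this phase is typically driven by a single call on the shared testing utility; a minimal sketch, assuming the standard HBaseTestingUtility and JUnit 4 wiring (field and method names here are illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Stops the HBase mini cluster (master and region servers) and its backing
    // DFS and ZooKeeper instances; this is what produces the "Shutting down
    // minicluster" and "STOPPING region server" lines in the log.
    TEST_UTIL.shutdownMiniCluster();
  }
}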
2023-07-13 03:16:54,787 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(3305): Received CLOSE for c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:54,787 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:54,787 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:54,788 DEBUG [RS:1;jenkins-hbase20:36825] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7a98c8bc to 127.0.0.1:57116 2023-07-13 03:16:54,788 DEBUG [RS:1;jenkins-hbase20:36825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,788 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-13 03:16:54,788 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1478): Online Regions={c07710fa1db4342318e9b1de545988c8=hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8.} 2023-07-13 03:16:54,788 DEBUG [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1504): Waiting on c07710fa1db4342318e9b1de545988c8 2023-07-13 03:16:54,787 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(3305): Received CLOSE for c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:54,787 DEBUG [RS:3;jenkins-hbase20:41765] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x128a311c to 127.0.0.1:57116 2023-07-13 03:16:54,788 DEBUG [RS:3;jenkins-hbase20:41765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c07710fa1db4342318e9b1de545988c8, disabling compactions & flushes 2023-07-13 03:16:54,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:54,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:54,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. after waiting 0 ms 2023-07-13 03:16:54,788 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:54,788 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c07710fa1db4342318e9b1de545988c8 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-13 03:16:54,788 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,41765,1689218212508; all regions closed. 
2023-07-13 03:16:54,789 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:54,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing c7783c875957060e5428a4304c2bb71d, disabling compactions & flushes 2023-07-13 03:16:54,790 DEBUG [RS:2;jenkins-hbase20:34353] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a629e53 to 127.0.0.1:57116 2023-07-13 03:16:54,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:54,790 DEBUG [RS:2;jenkins-hbase20:34353] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,790 INFO [RS:2;jenkins-hbase20:34353] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:54,790 INFO [RS:2;jenkins-hbase20:34353] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:54,790 INFO [RS:2;jenkins-hbase20:34353] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:54,790 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-13 03:16:54,790 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:54,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. after waiting 0 ms 2023-07-13 03:16:54,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 
2023-07-13 03:16:54,790 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing c7783c875957060e5428a4304c2bb71d 1/1 column families, dataSize=6.53 KB heapSize=10.82 KB 2023-07-13 03:16:54,792 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-13 03:16:54,792 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, c7783c875957060e5428a4304c2bb71d=hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d.} 2023-07-13 03:16:54,792 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-13 03:16:54,792 DEBUG [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1504): Waiting on 1588230740, c7783c875957060e5428a4304c2bb71d 2023-07-13 03:16:54,792 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-13 03:16:54,793 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-13 03:16:54,793 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-13 03:16:54,793 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-13 03:16:54,793 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.82 KB 2023-07-13 03:16:54,797 DEBUG [RS:0;jenkins-hbase20:46211] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs 2023-07-13 03:16:54,797 INFO [RS:0;jenkins-hbase20:46211] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C46211%2C1689218210846:(num 1689218211659) 2023-07-13 03:16:54,797 DEBUG [RS:0;jenkins-hbase20:46211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,798 INFO [RS:0;jenkins-hbase20:46211] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,798 DEBUG [RS:3;jenkins-hbase20:41765] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs 2023-07-13 03:16:54,798 INFO [RS:3;jenkins-hbase20:41765] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C41765%2C1689218212508:(num 1689218212817) 2023-07-13 03:16:54,798 DEBUG [RS:3;jenkins-hbase20:41765] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,798 INFO [RS:3;jenkins-hbase20:41765] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,805 INFO [RS:0;jenkins-hbase20:46211] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:54,805 INFO [RS:3;jenkins-hbase20:41765] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:54,805 INFO [RS:0;jenkins-hbase20:46211] regionserver.CompactSplit(434): 
Waiting for Split Thread to finish... 2023-07-13 03:16:54,805 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:54,805 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:54,805 INFO [RS:3;jenkins-hbase20:41765] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:54,805 INFO [RS:0;jenkins-hbase20:46211] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:54,806 INFO [RS:3;jenkins-hbase20:41765] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:54,806 INFO [RS:0;jenkins-hbase20:46211] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:54,806 INFO [RS:3;jenkins-hbase20:41765] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:54,807 INFO [RS:0;jenkins-hbase20:46211] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:46211 2023-07-13 03:16:54,808 INFO [RS:3;jenkins-hbase20:41765] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:41765 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:54,809 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,46211,1689218210846 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:54,810 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,41765,1689218212508 2023-07-13 03:16:54,821 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/.tmp/info/3a9d444517034ed2a560880660e09439 2023-07-13 03:16:54,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.53 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/.tmp/m/11dc48a6926641c6a77cd17480cefaf9 2023-07-13 03:16:54,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/info/e421538f26f34c209d50aa7e639f4c32 2023-07-13 03:16:54,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a9d444517034ed2a560880660e09439 2023-07-13 03:16:54,834 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/.tmp/info/3a9d444517034ed2a560880660e09439 as hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/info/3a9d444517034ed2a560880660e09439 2023-07-13 
03:16:54,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 11dc48a6926641c6a77cd17480cefaf9 2023-07-13 03:16:54,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/.tmp/m/11dc48a6926641c6a77cd17480cefaf9 as hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/m/11dc48a6926641c6a77cd17480cefaf9 2023-07-13 03:16:54,839 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e421538f26f34c209d50aa7e639f4c32 2023-07-13 03:16:54,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3a9d444517034ed2a560880660e09439 2023-07-13 03:16:54,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/info/3a9d444517034ed2a560880660e09439, entries=3, sequenceid=9, filesize=5.0 K 2023-07-13 03:16:54,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for c07710fa1db4342318e9b1de545988c8 in 52ms, sequenceid=9, compaction requested=false 2023-07-13 03:16:54,840 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,841 INFO [regionserver/jenkins-hbase20:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 11dc48a6926641c6a77cd17480cefaf9 2023-07-13 03:16:54,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/m/11dc48a6926641c6a77cd17480cefaf9, entries=12, sequenceid=29, filesize=5.5 K 2023-07-13 03:16:54,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.53 KB/6685, heapSize ~10.80 KB/11064, currentSize=0 B/0 for c7783c875957060e5428a4304c2bb71d in 55ms, sequenceid=29, compaction requested=false 2023-07-13 03:16:54,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/rsgroup/c7783c875957060e5428a4304c2bb71d/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-13 03:16:54,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/namespace/c07710fa1db4342318e9b1de545988c8/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-13 03:16:54,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): 
Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:54,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:54,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c7783c875957060e5428a4304c2bb71d: 2023-07-13 03:16:54,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689218212020.c7783c875957060e5428a4304c2bb71d. 2023-07-13 03:16:54,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:54,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for c07710fa1db4342318e9b1de545988c8: 2023-07-13 03:16:54,864 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689218211863.c07710fa1db4342318e9b1de545988c8. 2023-07-13 03:16:54,873 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/rep_barrier/f6be7ad76c094bc889adfa14c0c62c0c 2023-07-13 03:16:54,878 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f6be7ad76c094bc889adfa14c0c62c0c 2023-07-13 03:16:54,891 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/table/281b704fae31498f9848f78e2b7b4adb 2023-07-13 03:16:54,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 281b704fae31498f9848f78e2b7b4adb 2023-07-13 03:16:54,896 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/info/e421538f26f34c209d50aa7e639f4c32 as hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/info/e421538f26f34c209d50aa7e639f4c32 2023-07-13 03:16:54,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e421538f26f34c209d50aa7e639f4c32 2023-07-13 03:16:54,902 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/info/e421538f26f34c209d50aa7e639f4c32, entries=22, sequenceid=26, filesize=7.3 K 2023-07-13 03:16:54,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/rep_barrier/f6be7ad76c094bc889adfa14c0c62c0c as 
hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/rep_barrier/f6be7ad76c094bc889adfa14c0c62c0c 2023-07-13 03:16:54,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f6be7ad76c094bc889adfa14c0c62c0c 2023-07-13 03:16:54,908 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/rep_barrier/f6be7ad76c094bc889adfa14c0c62c0c, entries=1, sequenceid=26, filesize=4.9 K 2023-07-13 03:16:54,909 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/.tmp/table/281b704fae31498f9848f78e2b7b4adb as hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/table/281b704fae31498f9848f78e2b7b4adb 2023-07-13 03:16:54,911 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:54,911 INFO [RS:3;jenkins-hbase20:41765] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,41765,1689218212508; zookeeper connection closed. 2023-07-13 03:16:54,911 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:41765-0x1008454d7cc000b, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:54,911 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,41765,1689218212508] 2023-07-13 03:16:54,911 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49c9d573] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49c9d573 2023-07-13 03:16:54,911 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,41765,1689218212508; numProcessing=1 2023-07-13 03:16:54,912 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,41765,1689218212508 already deleted, retry=false 2023-07-13 03:16:54,912 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,41765,1689218212508 expired; onlineServers=3 2023-07-13 03:16:54,912 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,46211,1689218210846] 2023-07-13 03:16:54,912 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,46211,1689218210846; numProcessing=2 2023-07-13 03:16:54,912 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,46211,1689218210846 already deleted, retry=false 2023-07-13 03:16:54,912 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,46211,1689218210846 expired; onlineServers=2 2023-07-13 03:16:54,914 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] 
regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 281b704fae31498f9848f78e2b7b4adb 2023-07-13 03:16:54,914 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/table/281b704fae31498f9848f78e2b7b4adb, entries=6, sequenceid=26, filesize=5.1 K 2023-07-13 03:16:54,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4621, heapSize ~8.77 KB/8984, currentSize=0 B/0 for 1588230740 in 122ms, sequenceid=26, compaction requested=false 2023-07-13 03:16:54,927 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-13 03:16:54,927 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-13 03:16:54,927 INFO [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:54,927 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-13 03:16:54,927 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase20:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-13 03:16:54,966 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:54,967 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:46211-0x1008454d7cc0001, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:54,966 INFO [RS:0;jenkins-hbase20:46211] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,46211,1689218210846; zookeeper connection closed. 2023-07-13 03:16:54,967 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1109a12d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1109a12d 2023-07-13 03:16:54,988 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,36825,1689218210969; all regions closed. 2023-07-13 03:16:54,993 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,34353,1689218211105; all regions closed. 
2023-07-13 03:16:54,996 DEBUG [RS:1;jenkins-hbase20:36825] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs 2023-07-13 03:16:54,996 INFO [RS:1;jenkins-hbase20:36825] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C36825%2C1689218210969:(num 1689218211669) 2023-07-13 03:16:54,996 DEBUG [RS:1;jenkins-hbase20:36825] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:54,997 INFO [RS:1;jenkins-hbase20:36825] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:54,997 INFO [RS:1;jenkins-hbase20:36825] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:54,997 INFO [RS:1;jenkins-hbase20:36825] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-13 03:16:54,997 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:54,997 INFO [RS:1;jenkins-hbase20:36825] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-13 03:16:55,000 INFO [RS:1;jenkins-hbase20:36825] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-13 03:16:55,003 INFO [RS:1;jenkins-hbase20:36825] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:36825 2023-07-13 03:16:55,006 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:55,006 DEBUG [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs 2023-07-13 03:16:55,007 INFO [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C34353%2C1689218211105.meta:.meta(num 1689218211803) 2023-07-13 03:16:55,007 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,36825,1689218210969 2023-07-13 03:16:55,006 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:55,008 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,36825,1689218210969] 2023-07-13 03:16:55,008 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,36825,1689218210969; numProcessing=3 2023-07-13 03:16:55,008 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,36825,1689218210969 already deleted, retry=false 2023-07-13 03:16:55,008 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,36825,1689218210969 expired; onlineServers=1 2023-07-13 03:16:55,017 DEBUG [RS:2;jenkins-hbase20:34353] 
wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/oldWALs 2023-07-13 03:16:55,017 INFO [RS:2;jenkins-hbase20:34353] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase20.apache.org%2C34353%2C1689218211105:(num 1689218211649) 2023-07-13 03:16:55,017 DEBUG [RS:2;jenkins-hbase20:34353] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:55,017 INFO [RS:2;jenkins-hbase20:34353] regionserver.LeaseManager(133): Closed leases 2023-07-13 03:16:55,017 INFO [RS:2;jenkins-hbase20:34353] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase20:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-13 03:16:55,017 INFO [regionserver/jenkins-hbase20:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:55,019 INFO [RS:2;jenkins-hbase20:34353] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:34353 2023-07-13 03:16:55,021 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase20.apache.org,34353,1689218211105 2023-07-13 03:16:55,021 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-13 03:16:55,021 ERROR [Listener at localhost.localdomain/45255-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3b98b5c rejected from java.util.concurrent.ThreadPoolExecutor@7860ff3[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-13 03:16:55,021 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase20.apache.org,34353,1689218211105] 2023-07-13 03:16:55,021 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase20.apache.org,34353,1689218211105; numProcessing=4 2023-07-13 03:16:55,022 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase20.apache.org,34353,1689218211105 already deleted, retry=false 2023-07-13 03:16:55,022 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase20.apache.org,34353,1689218211105 expired; onlineServers=0 2023-07-13 03:16:55,022 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase20.apache.org,35861,1689218210690' ***** 2023-07-13 03:16:55,022 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-13 03:16:55,023 DEBUG [M:0;jenkins-hbase20:35861] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32d23410, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase20.apache.org/148.251.75.209:0 2023-07-13 03:16:55,023 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-13 03:16:55,026 INFO [M:0;jenkins-hbase20:35861] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2feb7bcb{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-13 03:16:55,027 INFO [M:0;jenkins-hbase20:35861] server.AbstractConnector(383): Stopped ServerConnector@54a66b85{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:55,027 INFO [M:0;jenkins-hbase20:35861] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-13 03:16:55,028 INFO [M:0;jenkins-hbase20:35861] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5ce2c06b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-13 03:16:55,032 INFO [M:0;jenkins-hbase20:35861] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@31d636bb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/hadoop.log.dir/,STOPPED} 2023-07-13 03:16:55,033 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegionServer(1144): stopping server jenkins-hbase20.apache.org,35861,1689218210690 2023-07-13 03:16:55,033 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegionServer(1170): stopping server jenkins-hbase20.apache.org,35861,1689218210690; all regions closed. 2023-07-13 03:16:55,033 DEBUG [M:0;jenkins-hbase20:35861] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-13 03:16:55,033 INFO [M:0;jenkins-hbase20:35861] master.HMaster(1491): Stopping master jetty server 2023-07-13 03:16:55,034 INFO [M:0;jenkins-hbase20:35861] server.AbstractConnector(383): Stopped ServerConnector@3c2b629a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-13 03:16:55,035 DEBUG [M:0;jenkins-hbase20:35861] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-13 03:16:55,035 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-13 03:16:55,035 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218211425] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.large.0-1689218211425,5,FailOnTimeoutGroup] 2023-07-13 03:16:55,035 DEBUG [M:0;jenkins-hbase20:35861] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-13 03:16:55,035 DEBUG [master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218211426] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase20:0:becomeActiveMaster-HFileCleaner.small.0-1689218211426,5,FailOnTimeoutGroup] 2023-07-13 03:16:55,035 INFO [M:0;jenkins-hbase20:35861] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-13 03:16:55,035 INFO [M:0;jenkins-hbase20:35861] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-13 03:16:55,035 INFO [M:0;jenkins-hbase20:35861] hbase.ChoreService(369): Chore service for: master/jenkins-hbase20:0 had [] on shutdown 2023-07-13 03:16:55,035 DEBUG [M:0;jenkins-hbase20:35861] master.HMaster(1512): Stopping service threads 2023-07-13 03:16:55,035 INFO [M:0;jenkins-hbase20:35861] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-13 03:16:55,035 ERROR [M:0;jenkins-hbase20:35861] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-13 03:16:55,035 INFO [M:0;jenkins-hbase20:35861] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-13 03:16:55,036 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-13 03:16:55,122 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,122 INFO [RS:2;jenkins-hbase20:34353] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,34353,1689218211105; zookeeper connection closed. 
2023-07-13 03:16:55,122 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:34353-0x1008454d7cc0003, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,123 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2cf0b87a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2cf0b87a 2023-07-13 03:16:55,124 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-13 03:16:55,124 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-13 03:16:55,124 INFO [M:0;jenkins-hbase20:35861] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-13 03:16:55,124 INFO [M:0;jenkins-hbase20:35861] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-13 03:16:55,124 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-13 03:16:55,124 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-13 03:16:55,125 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:55,125 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:55,125 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-13 03:16:55,125 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-13 03:16:55,125 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-13 03:16:55,125 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-13 03:16:55,125 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.28 KB heapSize=90.73 KB 2023-07-13 03:16:55,140 INFO [M:0;jenkins-hbase20:35861] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.28 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6ff2cd39e8c643be91f0060321391121 2023-07-13 03:16:55,145 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegionFileSystem(485): Committing hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6ff2cd39e8c643be91f0060321391121 as hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6ff2cd39e8c643be91f0060321391121 2023-07-13 03:16:55,149 INFO [M:0;jenkins-hbase20:35861] regionserver.HStore(1080): Added hdfs://localhost.localdomain:38103/user/jenkins/test-data/5a870d3a-babd-e4f2-81d9-b6208afb22d8/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6ff2cd39e8c643be91f0060321391121, entries=22, sequenceid=175, filesize=11.1 K 2023-07-13 03:16:55,150 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegion(2948): Finished flush of dataSize ~76.28 KB/78109, heapSize ~90.71 KB/92888, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=175, compaction requested=false 2023-07-13 03:16:55,152 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-13 03:16:55,152 DEBUG [M:0;jenkins-hbase20:35861] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-13 03:16:55,156 INFO [M:0;jenkins-hbase20:35861] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-13 03:16:55,156 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-13 03:16:55,157 INFO [M:0;jenkins-hbase20:35861] ipc.NettyRpcServer(158): Stopping server on /148.251.75.209:35861 2023-07-13 03:16:55,158 DEBUG [M:0;jenkins-hbase20:35861] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase20.apache.org,35861,1689218210690 already deleted, retry=false 2023-07-13 03:16:55,568 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,568 INFO [M:0;jenkins-hbase20:35861] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,35861,1689218210690; zookeeper connection closed. 
2023-07-13 03:16:55,568 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): master:35861-0x1008454d7cc0000, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,669 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,669 INFO [RS:1;jenkins-hbase20:36825] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase20.apache.org,36825,1689218210969; zookeeper connection closed. 2023-07-13 03:16:55,669 DEBUG [Listener at localhost.localdomain/45255-EventThread] zookeeper.ZKWatcher(600): regionserver:36825-0x1008454d7cc0002, quorum=127.0.0.1:57116, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-13 03:16:55,669 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1467885f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1467885f 2023-07-13 03:16:55,670 INFO [Listener at localhost.localdomain/45255] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-13 03:16:55,670 WARN [Listener at localhost.localdomain/45255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:55,681 INFO [Listener at localhost.localdomain/45255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:55,788 WARN [BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:55,789 WARN [BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-248359729-148.251.75.209-1689218209824 (Datanode Uuid 0a41fae6-7cb1-4785-bf04-0aa618fdac3d) service to localhost.localdomain/127.0.0.1:38103 2023-07-13 03:16:55,790 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data5/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:55,790 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data6/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:55,793 WARN [Listener at localhost.localdomain/45255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:55,797 INFO [Listener at localhost.localdomain/45255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:55,901 WARN [BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:55,901 WARN 
[BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-248359729-148.251.75.209-1689218209824 (Datanode Uuid cacb45dd-0f3b-42ba-9e2d-a12a2c7e8a1d) service to localhost.localdomain/127.0.0.1:38103 2023-07-13 03:16:55,902 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data3/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:55,903 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data4/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:55,905 WARN [Listener at localhost.localdomain/45255] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-13 03:16:55,909 INFO [Listener at localhost.localdomain/45255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-13 03:16:56,014 WARN [BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-13 03:16:56,014 WARN [BP-248359729-148.251.75.209-1689218209824 heartbeating to localhost.localdomain/127.0.0.1:38103] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-248359729-148.251.75.209-1689218209824 (Datanode Uuid 046b911f-274d-45d8-907b-3d6d7861bded) service to localhost.localdomain/127.0.0.1:38103 2023-07-13 03:16:56,015 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data1/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:56,015 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f27a0f06-9cf1-37de-e2c0-e7e0a0f251a8/cluster_e2bfd937-9134-83af-889e-949b5ecd5c75/dfs/data/data2/current/BP-248359729-148.251.75.209-1689218209824] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-13 03:16:56,026 INFO [Listener at localhost.localdomain/45255] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost.localdomain:0 2023-07-13 03:16:56,140 INFO [Listener at localhost.localdomain/45255] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-13 03:16:56,169 INFO [Listener at localhost.localdomain/45255] hbase.HBaseTestingUtility(1293): Minicluster is down